A Novel BERT-based Classifier to Detect Political Leaning of YouTube Videos based on their Titles (2404.04261v1)

Published 16 Feb 2024 in cs.CL and cs.AI

Abstract: A quarter of US adults regularly get their news from YouTube. Yet, despite the massive amount of political content available on the platform, to date no classifier has been proposed to identify the political leaning of YouTube videos. To fill this gap, we propose a novel classifier based on BERT -- a language model from Google -- to classify YouTube videos, based merely on their titles, into six categories: Far Left, Left, Center, Anti-Woke, Right, and Far Right. We used a public dataset of 10 million YouTube video titles (spanning various categories) to train and validate the proposed classifier. We compare the classifier against several alternatives trained on the same dataset, revealing that our classifier achieves the highest accuracy (75%) and the highest F1 score (77%). To further validate the classification performance, we collect videos from the YouTube channels of numerous prominent news agencies with widely known political leanings, such as Fox News and The New York Times, and apply our classifier to their video titles. In the vast majority of cases, the predicted political leaning matches that of the news agency.
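
To make the method concrete, here is a minimal sketch of what a BERT-based title classifier looks like, written against the Hugging Face `transformers` API as a stand-in; the checkpoint name, the 64-token cap, and the `predict_leaning` helper are illustrative assumptions rather than the authors' exact setup, and the six-way classification head would first need to be fine-tuned on the labeled title dataset.

```python
# Illustrative sketch, not the paper's exact implementation: a BERT encoder
# with a six-way classification head applied to YouTube video titles.
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

LABELS = ["Far Left", "Left", "Center", "Anti-Woke", "Right", "Far Right"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),  # the head starts randomly initialized
)
model.eval()

def predict_leaning(title: str) -> str:
    """Map a single video title to one of the six leaning classes."""
    inputs = tokenizer(title, return_tensors="pt", truncation=True, max_length=64)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Before fine-tuning on labeled titles (e.g. with transformers.Trainer),
# the prediction below is arbitrary; after fine-tuning it corresponds to
# one of the six political-leaning categories.
print(predict_leaning("Breaking: Senate passes sweeping new spending bill"))
```

A short maximum sequence length is a reasonable choice here because video titles are short; and since the six classes are unlikely to be balanced in a 10-million-title corpus, per-class weighting of the training loss would be a natural refinement.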

IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. 
[2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. 
CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. 
In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. 
[2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. 
In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. 
[2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
[36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
[37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. 
Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. 
[2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. 
In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. 
[2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. 
[2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. 
Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. 
[2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters
Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China (2019)
Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), e2101967118 (2021)
Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing radicalization pathways on YouTube. CoRR abs/1908.08313 (2019)
Gu, F., Jiang, D.: Prediction of political leanings of Chinese-speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
Pennington, J., Socher, R., Manning, C.D.: GloVe: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), pgad264 (2023)
Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019) Aksenov et al. [2021] Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. 
[2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. 
Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. 
[2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. 
In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. 
[2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. 
[2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. 
Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. 
[2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 
1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. 
In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. 
[2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. 
[2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. 
In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. 
Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  4. Schomer, A.: US YouTube advertising 2020. eMarketer (2020). https://www.emarketer.com/content/us-youtube-advertising-2020/
  5. D'Alonzo, S., Tegmark, M.: Machine-learning media bias. PLOS ONE 17(8), e0271947 (2022)
  6. Kulkarni, V., Ye, J., Skiena, S., Wang, W.Y.: Multi-view models for political ideology detection of news articles. arXiv preprint arXiv:1809.03485 (2018)
  7. Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019)
  8. Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021)
  9. Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019)
 10. Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014)
 11. Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016)
 12. Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE
 13. Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE
 14. Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), e2122636119 (2022)
 15. Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020)
 16. Beltagy, I., Lo, K., Cohan, A.: SciBERT: A pretrained language model for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3615–3620 (2019)
 17. Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
 18. Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), e2101967118 (2021)
 19. Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
 20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing radicalization pathways on YouTube. CoRR abs/1908.08313 (2019)
 21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese-speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
 22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
 23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
 24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning Word2Vec. The Journal of Supercomputing, 1–16 (2021)
 25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
 26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and Word2Vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
 27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
 28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: Political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
 29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
 30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
 31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
 32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
 33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), pgad264 (2023)
 34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
 36. BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
 37. AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
[2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kulkarni, V., Ye, J., Skiena, S., Wang, W.Y.: Multi-view models for political ideology detection of news articles. arXiv preprint arXiv:1809.03485 (2018) Li and Goldwasser [2019] Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019) Aksenov et al. [2021] Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. 
arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019) Aksenov et al. [2021] Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. 
Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. 
In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. 
[2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. 
Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. 
Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. 
[2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. 
CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. 
In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. 
[2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. 
[2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019) Aksenov et al. [2021] Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. 
In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 
121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. 
EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. 
In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. 
[2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. 
[2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). 
IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. 
[2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. 
IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. 
[2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. 
Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. 
[2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). 
https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. 
[2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. 
arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. 
Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. 
[2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. 
In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 
37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. 
Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  7. Li, C., Goldwasser, D.: Encoding social information with graph convolutional networks for political perspective detection in news media. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2594–2604 (2019) Aksenov et al. [2021] Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. 
In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 
121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. 
EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. 
In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. 
[2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  8. Aksenov, D., Bourgonje, P., Zaczynska, K., Ostendorff, M., Schneider, J.M., Rehm, G.: Fine-grained classification of political bias in German news: A data set and initial experiments. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pp. 121–131 (2021) Gangula et al. [2019] Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. 
[2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). 
IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. 
Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 
1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. 
[2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. 
In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). 
IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. 
[2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. 
In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. 
[2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. 
Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. 
Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. 
[2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  9. Gangula, R.R.R., Duggenpudi, S.R., Mamidi, R.: Detecting political bias in news articles using headline attention. In: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 77–84 (2019) Karpathy et al. [2014] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. 
[2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014) Abu-El-Haija et al. [2016] Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) Kalra et al. [2019] Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. 
In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. 
Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). 
https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). 
https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. 
[2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  12. Kalra, G.S., Kathuria, R.S., Kumar, A.: YouTube video classification based on title and description text. In: 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 74–79 (2019). IEEE Savigny and Purwarianti [2017] Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. 
In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE Mock et al. [2022] Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022) Lee et al. [2020] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020) Beltagy et al. [2019] Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019) Peng et al. [2019] Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. 
Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
Savigny, J., Purwarianti, A.: Emotion classification on YouTube comments using word embedding. In: 2017 International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA), pp. 1–5 (2017). IEEE
Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), 2122636119 (2022)
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020)
Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019)
Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021)
Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019)
Gu, F., Jiang, D.: Prediction of political leanings of Chinese speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning Word2Vec. The Journal of Supercomputing, 1–16 (2021)
Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and Word2Vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
[2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019) Hosseinmardi et al. [2021] Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. 
[2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021) Ledwich and Zaitsev [1912] Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv (1912) Ribeiro et al. [2019] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. 
In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. 
[2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. 
[2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. 
In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  14. Mock, F., Kretschmer, F., Kriese, A., Böcker, S., Marz, M.: Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proceedings of the National Academy of Sciences 119(35), e2122636119 (2022)
  15. Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234–1240 (2020)
  16. Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019)
  17. Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
  18. Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), e2101967118 (2021)
  19. Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv preprint (2019)
  20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing radicalization pathways on YouTube. CoRR abs/1908.08313 (2019)
  21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
  22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
  23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
  24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
  25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
  26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
  27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
  28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: Political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
  29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
  30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
  31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
  32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
  34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
  36. BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
  37. AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Jr., W.M.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019) 1908.08313 Gu and Jiang [2021] Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. 
In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. [2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Gu, F., Jiang, D.: Prediction of political leanings of chinese speaking twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE Tasnim et al. 
[2021] Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. [2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875 Xiao et al. 
[2023] Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). 
IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. 
Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 
1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. 
[2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  16. Beltagy, I., Cohan, A., Lo, K.S.: Pretrained contextualized embeddings for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3–7 (2019)
  17. Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
  18. Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021)
  19. Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
  20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing radicalization pathways on YouTube. CoRR abs/1908.08313 (2019)
  21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese-speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
  22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
  23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
  24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning Word2Vec. The Journal of Supercomputing, 1–16 (2021)
  25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
  26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and Word2Vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
  27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
  28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: Political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
  29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
  30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
  31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
  32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
  34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
  36. BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
  37. AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on twitter. EPJ Data Science 12(1), 20 (2023) Di Gennaro et al. [2021] Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. 
Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). 
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  17. Peng, Y., Yan, S., Lu, Z.: Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474 (2019)
  18. Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021)
  19. Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
  20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing radicalization pathways on YouTube. CoRR abs/1908.08313 (2019)
  21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
  22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
  23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
  24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
  25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
  26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
  27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
  28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: Political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
  29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
  30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
  31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
  32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
  34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
  35. BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
  36. AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. 
IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
18. Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D.M., Watts, D.J.: Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences 118(32), 2101967118 (2021)
19. Ledwich, M., Zaitsev, A.: Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211 (2019)
20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019)
21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese-speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube's recommendation algorithm is left-leaning in the United States. PNAS Nexus 2(8), 264 (2023)
34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
[36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
[37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
[2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021) Essa et al. [2023] Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid bert and lightgbm models. Complex & Intelligent Systems, 1–12 (2023) Shen and Liu [2021] Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Shen, Y., Liu, J.: Comparison of text sentiment analysis based on bert and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE Wang et al. [2020] Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020) Jiang et al. [2023] Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. 
[2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Jiang, J., Ren, X., Ferrara, E.: Retweet-bert: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023) Nyhan et al. [2023] Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). 
https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  20. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., Meira Jr., W.: Auditing Radicalization Pathways on YouTube. CoRR abs/1908.08313 (2019)
  21. Gu, F., Jiang, D.: Prediction of political leanings of Chinese-speaking Twitter users. In: 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), pp. 286–289 (2021). IEEE
  22. Tasnim, Z., Ahmed, S., Rahman, A., Sorna, J.F., Rahman, M.: Political ideology prediction from Bengali text using word embedding models. In: 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 724–727 (2021). https://doi.org/10.1109/ESCI50559.2021.9396875
  23. Xiao, Z., Zhu, J., Wang, Y., Zhou, P., Lam, W.H., Porter, M.A., Sun, Y.: Detecting political biases of named entities and hashtags on Twitter. EPJ Data Science 12(1), 20 (2023)
Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023) Mikolov et al. [2013] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162 Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. 
PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Ibrahim et al. [2023] Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023) Fernando and Tsokos [2021] Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021) [36] BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14 [37] AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
  24. Di Gennaro, G., Buonanno, A., Palmieri, F.A.: Considerations about learning word2vec. The Journal of Supercomputing, 1–16 (2021)
  25. Essa, E., Omar, K., Alqahtani, A.: Fake news detection based on a hybrid BERT and LightGBM models. Complex & Intelligent Systems, 1–12 (2023)
  26. Shen, Y., Liu, J.: Comparison of text sentiment analysis based on BERT and word2vec. In: 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), pp. 144–147 (2021). IEEE
  27. Wang, C., Nulty, P., Lillis, D.: A comparative study on word embeddings in deep learning for text classification. In: Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval, pp. 37–46 (2020)
  28. Jiang, J., Ren, X., Ferrara, E.: Retweet-BERT: political leaning detection using language features and information diffusion on social networks. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 459–469 (2023)
  29. Nyhan, B., Settle, J., Thorson, E., Wojcieszak, M., Barberá, P., Chen, A.Y., Allcott, H., Brown, T., Crespo-Tenorio, A., Dimmery, D., et al.: Like-minded sources on Facebook are prevalent but not polarizing. Nature 620(7972), 137–144 (2023)
  30. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
  31. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
  32. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  33. Ibrahim, H., AlDahoul, N., Lee, S., Rahwan, T., Zaki, Y.: YouTube’s recommendation algorithm is left-leaning in the United States. PNAS nexus 2(8), 264 (2023)
  34. Fernando, K.R.M., Tsokos, C.P.: Dynamically weighted balanced loss: class imbalanced learning and confidence calibration of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7), 2940–2951 (2021)
  35. BERT (Bidirectional Encoder Representations from Transformers). https://github.com/tensorflow/models/tree/master/official/legacy/bert. Accessed: 2023-11-14
  36. AllSides Media Bias Chart (2023). https://www.allsides.com/media-bias/media-bias-chart#biasmatters
Authors (3)
  1. Nouar AlDahoul (18 papers)
  2. Talal Rahwan (46 papers)
  3. Yasir Zaki (38 papers)