Analyzing COVID-19 Vaccination Sentiments in Nigerian Cyberspace: Insights from a Manually Annotated Twitter Dataset (2401.13133v1)

Published 23 Jan 2024 in cs.CL and cs.SI

Abstract: Numerous successes have been achieved in combating the COVID-19 pandemic, initially through precautionary measures such as lockdowns, social distancing, and the use of face masks. More recently, various vaccines have been developed to prevent or reduce the severity of COVID-19 infection. Despite the effectiveness of the precautionary measures and the vaccines, numerous controversies are widely shared on social media platforms like Twitter. In this paper, we explore the use of state-of-the-art transformer-based LLMs to study people's acceptance of vaccines in Nigeria. We developed a novel dataset by crawling multi-lingual tweets using relevant hashtags and keywords. Our analysis and visualizations revealed that most tweets expressed neutral sentiments about COVID-19 vaccines, with some individuals expressing positive views, and there was no strong preference for specific vaccine types, although Moderna received slightly more positive sentiment. We also found that fine-tuning a pre-trained LLM on an appropriate dataset can yield competitive results, even if the LLM was not initially pre-trained on the specific language of that dataset.
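
The fine-tuning approach the abstract describes (adapting a pre-trained multilingual transformer to an annotated tweet dataset for sentiment classification) can be sketched as follows. This is a minimal illustration assuming a Hugging Face Transformers workflow; the model name (bert-base-multilingual-cased), the three-class label scheme, the toy tweets, and all hyperparameters are assumptions for the example, not details taken from the paper.

```python
# Hypothetical sketch: fine-tune a pre-trained multilingual transformer for
# 3-class tweet sentiment classification (0=negative, 1=neutral, 2=positive).
# Model choice, hyperparameters, and example tweets are illustrative only.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "bert-base-multilingual-cased"  # assumed; any multilingual encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Toy stand-in for the manually annotated, multi-lingual tweet dataset.
data = Dataset.from_dict({
    "text": ["Vaccine rollout dey go well for my area", "I no trust this vaccine at all"],
    "label": [2, 0],
})

def tokenize(batch):
    # Pad/truncate tweets to a fixed length so Trainer can batch them directly.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sentiment-ft",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=data).train()
```

In practice the multilingual encoder's shared subword vocabulary is what lets such a model transfer to tweets in languages (or code-switched text) it was not explicitly pre-trained on, which is the effect the abstract reports.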

