From Pro, Anti to Informative and Hesitant: An Infoveillance study of COVID-19 vaccines and vaccination discourse on Twitter (2403.09349v1)
Abstract: The COVID-19 pandemic has posed unprecedented challenges worldwide, and vaccination has been a key strategy for combating the disease. Since Twitter is one of the most widely used public microblogging platforms, researchers have analysed COVID-19 vaccine and vaccination discourse on Twitter to explore the conversational dynamics around the topic. Contributing to the crisis informatics literature, we curate a large-scale geotagged Twitter dataset, GeoCovaxTweets Extended, and explore the discourse through multiple spatiotemporal analyses. The dataset spans 38 months, from the announcement of the first vaccine to the availability of booster doses. Results show that 43.4% of the collected tweets, although containing vaccine- and vaccination-related phrases and keywords, were unrelated to the COVID-19 context. In total, 23.1% of the discussions on vaccines and vaccination were classified as Pro, 16% as Hesitant, 11.4% as Anti, and 6.1% as Informative. The trend shifted towards Pro and Informative tweets globally as vaccination programs progressed, indicating a change in the public's perception of COVID-19 vaccines and vaccination. Furthermore, we explored the discourse by account attributes, i.e., follower counts and tweet counts, and found significant differences in discourse across these groups. Our findings highlight the potential of harnessing a large-scale geotagged Twitter dataset to understand global public health communication and to inform targeted interventions aimed at addressing vaccine hesitancy.
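The stance breakdown reported above (Pro, Hesitant, Anti, Informative, plus unrelated tweets) reduces to a per-class share of a labelled collection. A minimal sketch of that tally, assuming a hypothetical list of stance labels rather than the authors' actual classification pipeline or dataset:

```python
from collections import Counter

def stance_percentages(labels):
    """Return each stance class's share of the collection, as a percentage."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical labelled sample for illustration (not the paper's data):
labels = ["Pro", "Pro", "Hesitant", "Anti", "Informative", "Unrelated", "Pro", "Unrelated"]
shares = stance_percentages(labels)  # e.g. shares["Pro"] == 37.5
```

Because every tweet receives exactly one label, the shares sum to 100% (up to rounding), which is consistent with the figures above: 43.4% unrelated plus the four stance classes.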
[2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. 
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. 
[2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. 
JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. 
[2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: Spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021) Khan et al.
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al.
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14.
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2
[2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. 
Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 
16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 
782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020).
https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al.
[2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. 
[2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. 
Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. 
[2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al.
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Gisondi, M.A., Barber, R., Faust, J.S., Raja, A., Strehlow, M.C., Westafer, L.M., Gottlieb, M.: A deadly infodemic: Social media and the power of COVID-19 misinformation. JMIR Publications Toronto, Canada (2022) Muric et al. [2021] Muric, G., Wu, Y., Ferrara, E., et al.: Covid-19 vaccine hesitancy on social media: building a public twitter data set of antivaccine content, vaccine misinformation, and conspiracies. JMIR public health and surveillance 7(11), 30642 (2021) Lazarus et al. [2022] Lazarus, J.V., Wyka, K., White, T.M., Picchio, C.A., Rabin, K., Ratzan, S.C., Parsons Leigh, J., Hu, J., El-Mohandes, A.: Revisiting covid-19 vaccine hesitancy around the world using data from 23 countries in 2021. Nature communications 13(1), 3801 (2022) Cotfas et al. [2021] Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. 
[2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Muric, G., Wu, Y., Ferrara, E., et al.: Covid-19 vaccine hesitancy on social media: building a public twitter data set of antivaccine content, vaccine misinformation, and conspiracies. JMIR public health and surveillance 7(11), 30642 (2021) Lazarus et al. [2022] Lazarus, J.V., Wyka, K., White, T.M., Picchio, C.A., Rabin, K., Ratzan, S.C., Parsons Leigh, J., Hu, J., El-Mohandes, A.: Revisiting covid-19 vaccine hesitancy around the world using data from 23 countries in 2021. Nature communications 13(1), 3801 (2022) Cotfas et al. [2021] Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. 
[2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. 
[2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lazarus, J.V., Wyka, K., White, T.M., Picchio, C.A., Rabin, K., Ratzan, S.C., Parsons Leigh, J., Hu, J., El-Mohandes, A.: Revisiting covid-19 vaccine hesitancy around the world using data from 23 countries in 2021. Nature communications 13(1), 3801 (2022) Cotfas et al. [2021] Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. 
[2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information.
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. 
Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). 
IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. 
[2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022)
Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 
4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: CoVaxxy: A collection of English-language Twitter posts about COVID-19 vaccines. In: ICWSM, pp. 992–999 (2021)
- Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
- Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
- Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
- Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
- Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
- Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
- Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
- Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: Spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
- Sallam, M.: COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
- Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: The next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information.
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of COVID-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. 
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. 
[2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. 
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. 
[2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp.
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Lazarus, J.V., Wyka, K., White, T.M., Picchio, C.A., Rabin, K., Ratzan, S.C., Parsons Leigh, J., Hu, J., El-Mohandes, A.: Revisiting covid-19 vaccine hesitancy around the world using data from 23 countries in 2021. Nature communications 13(1), 3801 (2022) Cotfas et al. [2021] Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. 
Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). 
IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. Ieee Access 9, 33203–33223 (2021) Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. 
[2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of covid-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022) Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to covid-19 vaccination among australian twitter users: machine learning analysis. Journal of medical Internet research 23(5), 26953 (2021) Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. 
[2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: CoVaxxy: A collection of English-language Twitter posts about COVID-19 vaccines. In: ICWSM, pp. 992–999 (2021)
Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: Spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: The next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. 
[2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. 
[2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. 
arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter.
In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. 
arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. 
Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. 
[2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). 
https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
Cotfas et al. [2021] Cotfas, L.-A., Delcea, C., Roxin, I., Ioanăş, C., Gherai, D.S., Tajariol, F.: The longest month: Analyzing COVID-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement. IEEE Access 9, 33203–33223 (2021)
Delcea et al. [2022] Delcea, C., Cotfas, L.-A., Crăciun, L., Molănescu, A.G.: New wave of COVID-19 vaccine opinions in the month the 3rd booster dose arrived. Vaccines 10(6), 881 (2022)
Kwok et al. [2021] Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to COVID-19 vaccination among Australian Twitter users: Machine learning analysis. Journal of Medical Internet Research 23(5), 26953 (2021)
Loomba et al. [2021] Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour 5(3), 337–348 (2021)
DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: CoVaxxy: A collection of English-language Twitter posts about COVID-19 vaccines. In: ICWSM, pp. 992–999 (2021)
Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: Spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of covid-19 vaccine misinformation on vaccination intent in the uk and usa. Nature human behaviour 5(3), 337–348 (2021) DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. 
[2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. 
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. 
[2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al.
[2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: Covid-19 vaccine conversations on twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al.
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Conneau et al.
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. 
Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. 
Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. 
arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 https://aclanthology.org/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 https://aclanthology.org/2020.emnlp-demos.6
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . 
https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. 
In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. 
[2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How covid-19 information spread in US? The role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al.
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019)
- Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to COVID-19 vaccination among Australian Twitter users: machine learning analysis. Journal of Medical Internet Research 23(5), 26953 (2021)
- Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour 5(3), 337–348 (2021)
- DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: CoVaxxy: A collection of English-language Twitter posts about COVID-19 vaccines. In: ICWSM, pp. 992–999 (2021)
- Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
- Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
- Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
- Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
- Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
- Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
- Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
- Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
- Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
- Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of COVID-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCoV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. 
Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. 
Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in Neural Information Processing Systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics.
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: The next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine: A machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? 
the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Loomba, S., Figueiredo, A., Piatek, S.J., Graaf, K., Larson, H.J.: Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour 5(3), 337–348 (2021)
DeVerna et al. [2021] DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: CoVaxxy: A collection of English-language Twitter posts about COVID-19 vaccines. In: ICWSM, pp. 992–999 (2021)
Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
[2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. 
[2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 
1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. 
In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. 
[2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. 
[2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: Infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding.
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? 
the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- DeVerna, M.R., Pierri, F., Truong, B.T., Bollenbacher, J., Axelrod, D., Loynes, N., Torres-Lugo, C., Yang, K.-C., Menczer, F., Bryden, J.: Covaxxy: A collection of english-language twitter posts about covid-19 vaccines. In: ICWSM, pp. 992–999 (2021) Chen et al. [2020] Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective.
Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al.
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR public health and surveillance 6(2), 19273 (2020) Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. 
applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 
846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale covid-19 twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021) Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. 
Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423, https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding.
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2, https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
(2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. 
[2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022)
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? 
the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Chen, E., Lerman, K., Ferrara, E., et al.: Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance 6(2), 19273 (2020)
Banda et al. [2021] Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
Lamsal [2021] Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators.
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research.
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . 
https://aclanthology.org/2020.emnlp-demos.6
- Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., Artemova, E., Tutubalina, E., Chowell, G.: A large-scale COVID-19 Twitter chatter dataset for open scientific research—an international collaboration. Epidemiologia 2(3), 315–324 (2021)
- Lamsal, R.: Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence 51, 2790–2804 (2021)
- Imran, M., Qazi, U., Ofli, F.: TBCOV: Two billion multilingual COVID-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022)
- Lamsal, R., Read, M.R., Karunasekera, S.: BillionCOV: An enriched billion-scale collection of COVID-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023)
- Chen, N., Chen, X., Pang, J.: A multilingual dataset of COVID-19 vaccination attitudes on Twitter. Data in Brief 44, 108503 (2022)
- Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VaccinEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
- Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
- Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer.
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 
4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. 
[2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). 
https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Lamsal, R.: Design and analysis of a large-scale covid-19 tweets dataset. applied intelligence 51, 2790–2804 (2021) Imran et al. [2022] Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. 
In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010)
- Imran, M., Qazi, U., Ofli, F.: Tbcov: two billion multilingual covid-19 tweets with sentiment, entity, geo, and gender labels. Data 7(1), 8 (2022) Lamsal et al. [2023] Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. 
[2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. 
[2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in the US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: a dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: a pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Lamsal, R., Read, M.R., Karunasekera, S.: Billioncov: An enriched billion-scale collection of covid-19 tweets for efficient hydration. arXiv preprint arXiv:2301.11284 (2023) Chen et al. [2022] Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset.
Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases.
Applied Soft Computing 129, 109603 (2022)
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. 
[2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. 
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. 
Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. 
Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. 
[2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. 
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
- Chen, N., Chen, X., Pang, J.: A multilingual dataset of covid-19 vaccination attitudes on twitter. Data in Brief 44, 108503 (2022) Di Giovanni et al. [2022] Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: Vaccineu: Covid-19 vaccine conversations on twitter in french, german and italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022) Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. 
[2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Wolf et al.
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards covid-19 vaccines with twitter data in the united states: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021) Sallam [2021] Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. 
JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Sallam, M.: Covid-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021) Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of communication in healthcare 14(1), 12–19 (2021) Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). 
IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: Us based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8 (2021). IEEE Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864 (2022). 
Springer Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Di Giovanni, M., Pierri, F., Torres-Lugo, C., Brambilla, M.: VacciNEU: COVID-19 vaccine conversations on Twitter in French, German and Italian. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 1236–1244 (2022)
- Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
- Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. 
arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Hu et al. [2021] Hu, T., Wang, S., Luo, W., Zhang, M., Huang, X., Yan, Y., Liu, R., Ly, K., Kacker, V., She, B., et al.: Revealing public opinion towards COVID-19 vaccines with Twitter data in the United States: spatiotemporal perspective. Journal of Medical Internet Research 23(9), 30854 (2021)
- Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: a concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010)
- Sallam [2021] Sallam, M.: COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines 9(2), 160 (2021)
- Bonnevie et al. [2021] Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on Twitter during the COVID-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan et al. [2021] Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo and Kejriwal [2022] Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib et al. [2023] Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: The next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. 
[2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. 
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Bonnevie, E., Gallegos-Jeffrey, A., Goldbarg, J., Byrd, B., Smyser, J.: Quantifying the rise of vaccine opposition on twitter during the covid-19 pandemic. Journal of Communication in Healthcare 14(1), 12–19 (2021)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US based covid-19 tweets sentiment analysis using textblob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding covid-19 vaccine reaction through comparative analysis on twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito, C.: How covid-19 information spread in US? The role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of covid-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. 
In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. 
[2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
Nguyen et al. [2020a] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
Nguyen et al. [2020b] Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Khan, R., Rustam, F., Kanwal, K., Mehmood, A., Choi, G.S.: US-based COVID-19 tweets sentiment analysis using TextBlob and supervised machine learning algorithms. In: 2021 International Conference on Artificial Intelligence (ICAI), pp. 1–8. IEEE (2021)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases.
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. 
[2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. 
The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? 
the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Nguyen, D.Q., Vu, T., Nguyen, A.T.: BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200 (2020)
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Luo, Y., Kejriwal, M.: Understanding COVID-19 vaccine reaction through comparative analysis on Twitter. In: Intelligent Computing: Proceedings of the 2022 Computing Conference, Volume 1, pp. 846–864. Springer (2022)
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination Twitter dataset. Expert Systems with Applications 212, 118715 (2023)
- Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward COVID-19 vaccination: Infoveillance study using Twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. 
[2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
- Qorib, M., Oladunni, T., Denis, M., Ososanya, E., Cotae, P.: Covid-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on covid-19 vaccination twitter dataset. Expert Systems with Applications 212, 118715 (2023) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: infoveillance study using twitter posts. JMIR infodemiology 2(1), 33909 (2022) Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022) Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023) Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023) Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al.
[2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. 
Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. 
arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of covid-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Chandrasekaran et al. [2022] Chandrasekaran, R., Desai, R., Shah, H., Kumar, V., Moustakas, E., et al.: Examining public sentiments and attitudes toward covid-19 vaccination: Infoveillance study using twitter posts. JMIR Infodemiology 2(1), 33909 (2022)
- Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of covid-19 on vaccine-related opinions of twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about covid-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
- Comito [2021] Comito, C.: How covid-19 information spread in US? The role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.R.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine: a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Poddar et al. [2022] Poddar, S., Mondal, M., Misra, J., Ganguly, N., Ghosh, S.: Winds of change: Impact of COVID-19 on vaccine-related opinions of Twitter users. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, pp. 782–793 (2022)
- Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. 
[2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. 
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Martínez et al. [2023] Martínez, R.Y., Blanco, G., Lourenço, A.: Spanish corpora of tweets about COVID-19 vaccination for automatic stance detection. Information Processing & Management 60(3), 103294 (2023)
- Mu et al. [2023] Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh et al. [2023] Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423. https://aclanthology.org/N19-1423
- Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- Clark et al.
[2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. 
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Mu, Y., Jin, M., Grimshaw, C., Scarton, C., Bontcheva, K., Song, X.: VaxxHesitancy: A dataset for studying hesitancy towards COVID-19 vaccination on Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 17, pp. 1052–1062 (2023)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: GeoCovaxTweets: COVID-19 vaccines and vaccination-specific global geotagged Twitter conversations. arXiv preprint arXiv:2301.07378 (2023)
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. 
Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. 
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. 
[2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. 
arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? 
the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Singh, P., Lamsal, R., Chand, S., Shishodia, B., et al.: Geocovaxtweets: Covid-19 vaccines and vaccination-specific global geotagged twitter conversations. arXiv preprint arXiv:2301.07378 (2023) MacDonald et al. [2015] MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015) Raffel et al. [2020] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. 
[2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
- MacDonald, N.E., et al.: Vaccine hesitancy: Definition, scope and determinants. Vaccine 33(34), 4161–4164 (2015)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423
Wolf et al.
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). 
https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . 
https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. 
[2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. 
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. 
[2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. 
arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. 
[2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine: a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 Comito [2021] Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017) Devlin et al. [2019] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information.
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. 
[2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. 
Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423 . https://aclanthology.org/N19-1423 Liu et al. [2019] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019) Yang et al. [2019] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. 
[2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems 32 (2019) Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020) Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2 . https://aclanthology.org/2020.emnlp-demos.2 Sanh et al. [2019] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019) Conneau et al. [2019] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019) Wolf et al. [2020] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. 
Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6. https://aclanthology.org/2020.emnlp-demos.6
- Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2. https://aclanthology.org/2020.emnlp-demos.2
- Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019)
- Comito, C.: How COVID-19 information spread in US? The role of Twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
[2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. 
[2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. 
IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? 
inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. 
Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 32 (2019)
Clark et al. [2020] Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020)
- Nguyen, D.Q., Vu, T., Tuan Nguyen, A.: BERTweet: A pre-trained language model for English tweets. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 9–14. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.2
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. 
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. 
Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. 
[2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. 
Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., Rush, A.: Transformers: State-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online (2020). https://doi.org/10.18653/v1/2020.emnlp-demos.6 . https://aclanthology.org/2020.emnlp-demos.6 Nguyen et al. [2020] Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Nguyen, D.Q., Vu, T., Nguyen, A.T.: Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200 (2020) Comito [2021] Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. 
In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. 
Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. 
[2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Comito, C.: How covid-19 information spread in us? the role of twitter as early indicator of epidemics. IEEE Transactions on Services Computing 15(3), 1193–1205 (2021) Lamsal et al. [2022a] Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.R.: Addressing the location a/b problem on twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022) Lamsal et al. [2022b] Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. 
In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460 Lamsal et al. [2022c] Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? 
(2010) Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed covid-19 cases. Applied Soft Computing 129, 109603 (2022) Mathieu et al. [2021] Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of covid-19 vaccinations. Nature human behaviour 5(7), 947–953 (2021) Sv et al. [2021] Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen’s perspective about side effects of covid-19 vaccine–a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021) Rogers [2010] Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010) Rogers, E.M.: Diffusion of Innovations. Simon and Schuster, ??? (2010)
- Lamsal, R., Harwood, A., Read, M.R.: Addressing the location A/B problem on Twitter: the next generation location inference research. In: Proceedings of the 6th ACM SIGSPATIAL International Workshop on Location-based Recommendations, Geosocial Networks and Geoadvertising, pp. 1–4 (2022)
- Lamsal, R., Harwood, A., Read, M.: Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 3935–3944. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/BigData55660.2022.10020460
- Lamsal, R., Harwood, A., Read, M.R.: Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing 129, 109603 (2022)
- Mathieu, E., Ritchie, H., Ortiz-Ospina, E., Roser, M., Hasell, J., Appel, C., Giattino, C., Rodés-Guirao, L.: A global database of COVID-19 vaccinations. Nature Human Behaviour 5(7), 947–953 (2021)
- Sv, P., Tandon, J., Hinduja, H., et al.: Indian citizen's perspective about side effects of COVID-19 vaccine – a machine learning study. Diabetes & Metabolic Syndrome: Clinical Research & Reviews 15(4), 102172 (2021)
- Rogers, E.M.: Diffusion of Innovations. Simon and Schuster (2010)
- Pardeep Singh
- Rabindra Lamsal
- Monika Singh
- Satish Chand
- Bhawna Shishodia