
BillionCOV: An Enriched Billion-scale Collection of COVID-19 tweets for Efficient Hydration (2301.11284v2)

Published 18 Jan 2023 in cs.SI

Abstract: The COVID-19 pandemic introduced new norms such as social distancing, face masks, quarantine, lockdowns, travel restrictions, work/study from home, and business closures, to name a few. The pandemic's seriousness made people vocal on social media, especially on microblogs such as Twitter. Researchers have been collecting and sharing large-scale datasets of COVID-19 tweets since the early days of the outbreak. Because sharing raw Twitter data with third parties is restricted, users must hydrate the tweet identifiers in a public dataset to re-create the dataset locally. Large-scale datasets that include original tweets, retweets, quotes, and replies contain billions of tweets, which can take months to hydrate. Existing datasets also suffer from issues of proportion and redundancy: we report that more than 500 million tweet identifiers point to deleted or protected tweets. To address these issues, this paper introduces an enriched global billion-scale English-language COVID-19 tweets dataset, BillionCOV, which contains 1.4 billion tweets originating from 240 countries and territories between October 2019 and April 2022. Importantly, BillionCOV enables researchers to filter tweet identifiers for efficient hydration. This paper discusses associated methods to fetch raw Twitter data for a set of tweet identifiers, presents multiple tweet distributions to provide an overview of BillionCOV, and finally reviews the dataset's potential use cases.
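Hydration, as the abstract describes, means re-fetching full tweet objects from a list of tweet identifiers via the Twitter API. A minimal sketch of the batching logic involved is below; the 100-IDs-per-request limit reflects the Twitter API v2 tweets lookup endpoint, and all function names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of tweet hydration batching (not the paper's code).
# The Twitter API v2 tweets lookup endpoint accepts up to 100 IDs per
# request, so identifiers must be chunked before fetching.

def batch_ids(tweet_ids, batch_size=100):
    """Split a list of tweet identifiers into API-sized batches."""
    for i in range(0, len(tweet_ids), batch_size):
        yield tweet_ids[i:i + batch_size]

def hydrate(tweet_ids, fetch):
    """Hydrate tweet IDs using a caller-supplied `fetch` function.

    `fetch` (an assumption for this sketch) takes a comma-separated ID
    string and returns a list of tweet objects; in practice it would wrap
    an authenticated call to the Twitter API or a tool such as twarc.
    """
    tweets = []
    for batch in batch_ids(tweet_ids):
        tweets.extend(fetch(",".join(batch)))
    return tweets
```

Filtering out the 500+ million identifiers that point to deleted or protected tweets before this loop runs is exactly the saving BillionCOV targets: fewer wasted requests against the API's rate limits.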

Authors (3)
  1. Rabindra Lamsal (14 papers)
  2. Maria Rodriguez Read (13 papers)
  3. Shanika Karunasekera (33 papers)
Citations (6)
