How I Stopped Worrying about the Twitter Archive at the Library of Congress and Learned to Build a Little One for Myself (1611.08144v1)

Published 24 Nov 2016 in cs.CY, cs.DL, and cs.SI

Abstract: Twitter is among the most common sources of data employed in social media research, mainly because of its convenient APIs for collecting tweets. However, most researchers do not have access to the expensive Firehose and Twitter Historical Archive, and they must rely on data collected with free APIs whose representativeness has been questioned. In 2010 the Library of Congress announced an agreement with Twitter to provide researchers access to the whole Twitter Archive. However, that task proved to be daunting and, at the moment of this writing, no researcher has had the opportunity to access such materials. Still, prior efforts have shown that smaller searchable archives are feasible and can therefore be built by academics with relatively modest resources. In this paper I describe my efforts to build one such archive, covering the first three years of Twitter (actually from March 2006 to July 2009) and containing 1.48 billion tweets. If you carefully follow my directions you may have your very own little Twitter Historical Archive and you may forget about paying for historical tweets. Please note that to achieve this you should be proficient in some programming language, knowledgeable about Twitter APIs, and have some basic knowledge of ElasticSearch; moreover, you may well be disappointed by the quality of the contents of the final dataset.
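The paper's own pipeline is not reproduced on this page, but as a rough illustration of the kind of setup the abstract presupposes (collecting tweet JSON and indexing it into ElasticSearch so the archive becomes searchable), here is a minimal sketch using the Python elasticsearch client. The index name, the newline-delimited file layout, and the choice of fields are assumptions for the example, not details taken from the paper.

```python
# A minimal sketch, not the paper's code: bulk-index newline-delimited tweet
# JSON into a local Elasticsearch instance so the archive becomes searchable.
# The index name ("tweets") and the tweets/*.jsonl layout are assumptions.
import glob
import json

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # assumes a local single-node ES

def tweet_actions(paths, index="tweets"):
    """Yield one bulk-index action per tweet from newline-delimited JSON files."""
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                tweet = json.loads(line)
                yield {
                    "_index": index,
                    "_id": tweet["id_str"],  # tweet ID as the document ID
                    "_source": {
                        "created_at": tweet["created_at"],
                        "user": tweet["user"]["screen_name"],
                        "text": tweet["text"],
                    },
                }

# Dynamic mapping is enough for a sketch; a real archive would define an
# explicit mapping (e.g. a date format matching Twitter's created_at field).
helpers.bulk(es, tweet_actions(glob.glob("tweets/*.jsonl")))
```

Once indexed, the archive can be queried with ordinary full-text search, e.g. with the 8.x client: `es.search(index="tweets", query={"match": {"text": "earthquake"}})`.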

Citations (5)
