Improved methodology for longitudinal Web analytics using Common Crawl (2404.09770v1)
Abstract: Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages which is widely used as a source of language data for sequence model training and in web science research. Each of its constituent archives is on the order of 75TB in size. Using it for research, particularly for longitudinal studies, which necessarily involve multiple archives, is therefore very expensive in compute time, storage space, and/or network bandwidth. Two new methods for mitigating this problem are presented here, based on exploiting and extending the much smaller (<200 gigabytes (GB) compressed) index which is available for each archive. By adding Last-Modified timestamps to the index we enable longitudinal exploration using only a single archive. By comparing the distribution of index features for each of the 100 segments into which each archive is divided with their distribution over the whole archive, we have identified the least and most representative segments for a number of recent archives. This allows the segment(s) most representative of an archive to be used as proxies for the whole. We illustrate this approach in an analysis of changes in URI length over time, leading to an unanticipated insight into how the creation of Web pages has changed over time.
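The segment-representativeness idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline: it uses synthetic (segment, URI-length) pairs in place of real CDX index records, buckets lengths into a normalised histogram per segment, and ranks segments by total-variation distance from the whole-archive distribution (the distance metric and bin width are assumptions for the sketch; the paper compares distributions of several index features).

```python
import random
from collections import Counter

random.seed(0)

# Synthetic stand-in for index records: (segment_id, uri_length).
# A real run would extract these fields from the Common Crawl CDX index.
records = [(seg, max(10, int(random.gauss(60 + seg % 3, 15))))
           for seg in range(100) for _ in range(200)]

def hist(lengths, bin_width=10):
    """Normalised histogram of URI lengths, bucketed by bin_width."""
    counts = Counter(l // bin_width for l in lengths)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def tv_distance(p, q):
    """Total-variation distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

# Whole-archive distribution vs. each segment's distribution.
whole = hist([length for _, length in records])
per_segment = {s: hist([length for seg, length in records if seg == s])
               for s in range(100)}
distances = {s: tv_distance(per_segment[s], whole) for s in per_segment}

most_representative = min(distances, key=distances.get)
least_representative = max(distances, key=distances.get)
print(most_representative, least_representative)
```

A segment with a small distance to the whole-archive distribution can then stand in for the full ~75TB archive in downstream analyses, at roughly 1% of the data volume.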
- Amazon. [n. d.]. Amazon Web Services (AWS) Open Data Sponsorship Program. Web document. https://aws.amazon.com/opendata/open-data-sponsorship-program/ Retrieved: 22 November 2023.
- Apache Software Foundation. [n. d.]. Content analysis toolkit. Web document. https://tika.apache.org/ Retrieved: 7 December 2023.
- Internet Archive. [n. d.]. Sort-friendly URI Reordering Transform. Web document. http://crawler.archive.org/articles/user_manual/glossary.html#surt Retrieved: 29 November 2022.
- Stefan Baack and Mozilla Insights. 2024. Training Data for the Price of a Sandwich. Web document. https://foundation.mozilla.org/en/research/library/generative-ai-training-data/common-crawl/ Retrieved: 25 February 2024.
- Bertil Chapuis et al. 2020. An Empirical Study of the Use of Integrity Verification Mechanisms for Web Subresources. In Proceedings of The Web Conference 2020 (Taipei, Taiwan) (WWW ’20). Association for Computing Machinery, New York, NY, USA, 34–45. https://doi.org/10.1145/3366423.3380092
- Jingrui Chen. 2021. A Survey on HTTP cookies: Do large Internet companies collect more information from users? MSc dissertation. University of Edinburgh.
- Categorising AWS Common Crawl Dataset using MapReduce. In 2019 Conference on Next Generation Computing Applications (NextComp). 1–6. https://doi.org/10.1109/NEXTCOMP.2019.8883665
- Common Crawl. 2022. Common Crawl - Get Started. Web document. https://commoncrawl.org/get-started Retrieved: 22 November 2023.
- Common Crawl. 2023. Common Crawl - Open Repository of Web Crawl Data. Web document. https://commoncrawl.org Retrieved: 22 November 2023.
- Common Crawl. 2024. Common Crawl - Terms of Use. Web document. https://commoncrawl.org/terms-of-use Retrieved: 24 February 2024.
- Nick Cox. 2011, updated 2023. How to calculate a confidence interval for Spearman’s rank correlation. Web document. https://stats.stackexchange.com/a/18904 Retrieved: 6 December 2023.
- Peter Deutsch. 1996. GZIP file format specification version 4.3. Internet RFC. IETF. https://www.ietf.org/rfc/rfc1952.html Retrieved: 22 November 2023.
- Lukasz Domanski. 2020. Analysing Common Crawl - Efficient and Cost-Effective Processing of Large-Scale Data. MSc dissertation. University of Edinburgh.
- Yuheng Du et al. 2017. Representativeness of latent dirichlet allocation topics estimated from data samples with application to common crawl. In 2017 IEEE International Conference on Big Data (Big Data). 1418–1427. https://doi.org/10.1109/BigData.2017.8258075
- Julian Eberius et al. 2015. Building the Dresden Web Table Corpus: A Classification Approach. In 2015 IEEE/ACM 2nd International Symposium on Big Data Computing (BDC). 41–50. https://doi.org/10.1109/BDC.2015.30
- Roy T. Fielding and Julian Reschke. 2014. Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests. RFC 7232. https://doi.org/10.17487/RFC7232 Retrieved: 22 November 2023.
- Florian Hantke and Ben Stock. 2022. HTML violations and where to find them: a longitudinal analysis of specification violations in HTML. In Proceedings of the 22nd ACM Internet Measurement Conference (Nice, France) (IMC ’22). Association for Computing Machinery, New York, NY, USA, 358–373. https://doi.org/10.1145/3517745.3561437
- IEEE. 2017. The Open Group Base Specifications Issue 7, 2018 edition IEEE Std 1003.1-2017: POSIX.1a. Web document. https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_16 Retrieved: 25 November 2023.
- International Internet Preservation Consortium. 2017. The WARC File Format (WARC/1.1). Web document. https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/ Retrieved: 29 November 2022.
- Ilya Kreymer. 2015a. CDX Index Format (ZipNum Sharded CDX). Web document. https://github.com/ikreymer/pywb/wiki/CDX-Index-Format#zipnum-sharded-cdx Retrieved: 22 November 2023.
- Ilya Kreymer. 2015b. Announcing the Common Crawl Index. Web document. https://commoncrawl.org/blog/announcing-the-common-crawl-index Retrieved: 22 November 2023.
- Alexandra Luccioni and Joseph Viviano. 2021. What’s in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (Eds.). Association for Computational Linguistics, Online, 182–189. https://doi.org/10.18653/v1/2021.acl-short.24
- Tim Bray et al. (Eds.). 2008. Extensible Markup Language (XML) 1.0 (Fifth Edition). W3C Recommendation. W3C. http://www.w3.org/TR/xml/ Retrieved: 29 November 2022.
- Sebastian Nagel. 2022. Common Crawl (Getting Started). Web document. https://commoncrawl.org/the-data/get-started/ Retrieved: 29 November 2022.
- Alexander Panchenko et al. 2018. Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Nicoletta Calzolari et al. (Eds.). European Language Resources Association (ELRA), Miyazaki, Japan.
- Statsmodels Project. [n. d.]. statistical models, hypothesis tests, and data exploration. Web document. https://www.statsmodels.org/stable/index.html Retrieved: 28 November 2022.
- Dick Sites. 2013. Compact Language Detector 2. Web document. https://github.com/CLD2Owners/cld2 Retrieved: 7 December 2023.
- Henry S. Thompson and Jian Tong. 2018. Can Common Crawl Reliably Track Persistent Identifier (PID) Use Over Time. In Companion Proceedings of the The Web Conference 2018 (Lyon, France) (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1749–1755. https://doi.org/10.1145/3184558.3191636
- Pauli Virtanen et al. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17 (2020), 261–272. https://doi.org/10.1038/s41592-019-0686-2