Towards a Cleaner Document-Oriented Multilingual Crawled Corpus (2201.06642v1)

Published 17 Jan 2022 in cs.CL

Abstract: The need for large raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods in Natural Language Processing. While there have been some recent attempts to manually curate the data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant, which extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable for pre-training large generative language models, as well as, hopefully, for other applications in Natural Language Processing and the Digital Humanities.
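The abstract contrasts Ungoliant's line-level classification with the proposed document-oriented output. A minimal sketch of the underlying idea, aggregating per-line language predictions into one document-level label, might look like the following (the interface and weighting scheme here are illustrative assumptions, not the paper's actual implementation):

```python
def classify_document(lines):
    """Aggregate per-line language predictions into a document label.

    lines: list of (text, lang, confidence) triples, as might come
    from a line-level identifier (hypothetical interface).
    Votes are weighted by line length * confidence, so short noisy
    lines (menus, dates) do not dominate the document label.
    """
    scores = {}
    for text, lang, conf in lines:
        scores[lang] = scores.get(lang, 0.0) + len(text) * conf
    if not scores:
        return None, 0.0
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total


doc = [
    ("Bonjour tout le monde, ceci est un long paragraphe.", "fr", 0.95),
    ("Home | About | Contact", "en", 0.60),  # boilerplate line
    ("Encore une phrase en français.", "fr", 0.90),
]
lang, share = classify_document(doc)  # → ("fr", ~0.85)
```

Keeping whole documents together (rather than filtering line by line) is what makes such a corpus better suited to pre-training generative models, which benefit from long, coherent contexts.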

Authors (4)
  1. Julien Abadji (2 papers)
  2. Pedro Ortiz Suarez (15 papers)
  3. Laurent Romary (45 papers)
  4. Benoît Sagot (60 papers)
Citations (139)
