Common Crawl

Type of business: 501(c)(3) non-profit
Founded: 2007
Headquarters: San Francisco, California; Los Angeles, California, United States
Founder(s): Gil Elbaz
Key people: Peter Norvig, Rich Skrenta, Eva Ho
URL: commoncrawl.org
Content license: Apache 2.0 (software)

Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public. [1] [2] Common Crawl's web archive consists of petabytes of data collected since 2008. [3] It generally completes a new crawl every month. [4]


Common Crawl was founded by Gil Elbaz. [5] Advisors to the non-profit include Peter Norvig and Joi Ito. [6] The organization's crawlers respect nofollow and robots.txt policies. Open-source code for processing Common Crawl's dataset is publicly available.
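The data can be explored without special tooling. The following is a minimal sketch, not taken from Common Crawl's documentation: it assumes the publicly documented index endpoint at index.commoncrawl.org and the data bucket at data.commoncrawl.org, and uses the April 2024 crawl id "CC-MAIN-2024-18" purely as an example. It looks up a URL in the crawl index and retrieves the matching archived record with a single HTTP range request.

```python
# Sketch: query the Common Crawl URL index and fetch one archived record.
# Endpoints and the crawl id are assumptions for illustration.
import gzip
import json

import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-18-index"

# Ask the index which WARC file holds captures of a given URL.
resp = requests.get(INDEX, params={"url": "commoncrawl.org", "output": "json"}, timeout=30)
resp.raise_for_status()
record = json.loads(resp.text.splitlines()[0])  # first capture is enough here

# Each index entry gives the WARC filename plus the byte offset and length of
# the record, so a single HTTP range request retrieves just that record.
offset, length = int(record["offset"]), int(record["length"])
warc_url = "https://data.commoncrawl.org/" + record["filename"]
headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
raw = requests.get(warc_url, headers=headers, timeout=30).content

# The returned slice is an independently gzipped WARC record: WARC headers
# followed by the captured HTTP response.
print(gzip.decompress(raw).decode("utf-8", errors="replace")[:1000])
```

Libraries such as warcio can parse the decompressed record into structured WARC headers and HTTP payloads rather than printing it raw.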

The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other legal jurisdictions. [7]

English is the primary language for 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish and Chinese, each with less than 6% of documents. [8]

History

Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012. [9]

The organization began releasing metadata files and the text output of the crawlers alongside .arc files in July 2012. [10] Common Crawl's archives had only included .arc files previously. [10]

In December 2012, blekko donated to Common Crawl the search engine metadata it had gathered from crawls conducted from February to October 2012. [11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO." [11]

In 2013, Common Crawl began using the Apache Software Foundation's Nutch web crawler instead of a custom crawler. [12] Common Crawl switched from using .arc files to .warc files with its November 2013 crawl. [13]

A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020. [14]

Timeline of Common Crawl data

The following data have been collected from the official Common Crawl Blog [15] and Common Crawl's API. [16]

Crawl date | Size in TiB | Billions of pages | Comments
April 2024 | 386 | 2.7 | Crawl conducted from April 12 to April 24, 2024
February/March 2024 | 425 | 3.16 | Crawl conducted from February 20 to March 5, 2024
December 2023 | 454 | 3.35 | Crawl conducted from November 28 to December 12, 2023
June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023
April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023
February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023
December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022
October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022
April 2021 | 320 | 3.1 |
November 2018 | 220 | 2.6 |
October 2018 | 240 | 3.0 |
September 2018 | 220 | 2.8 |
August 2018 | 220 | 2.65 |
July 2018 | 255 | 3.25 |
June 2018 | 235 | 3.05 |
May 2018 | 215 | 2.75 |
April 2018 | 230 | 3.1 |
March 2018 | 250 | 3.2 |
February 2018 | 270 | 3.4 |
January 2018 | 270 | 3.4 |
December 2017 | 240 | 2.9 |
November 2017 | 260 | 3.2 |
October 2017 | 300 | 3.65 |
September 2017 | 250 | 3.01 |
August 2017 | 280 | 3.28 |
July 2017 | 240 | 2.89 |
June 2017 | 260 | 3.16 |
May 2017 | 250 | 2.96 |
April 2017 | 250 | 2.94 |
March 2017 | 250 | 3.07 |
February 2017 | 250 | 3.08 |
January 2017 | 250 | 3.14 |
December 2016 | | 2.85 |
October 2016 | | 3.25 |
September 2016 | | 1.72 |
August 2016 | | 1.61 |
July 2016 | | 1.73 |
June 2016 | | 1.23 |
May 2016 | | 1.46 |
April 2016 | | 1.33 |
February 2016 | | 1.73 |
November 2015 | 151 | 1.82 |
September 2015 | 106 | 1.32 |
August 2015 | 149 | 1.84 |
July 2015 | 145 | 1.81 |
June 2015 | 131 | 1.67 |
May 2015 | 159 | 2.05 |
April 2015 | 168 | 2.11 |
March 2015 | 124 | 1.64 |
February 2015 | 145 | 1.9 |
January 2015 | 139 | 1.82 |
December 2014 | 160 | 2.08 |
November 2014 | 135 | 1.95 |
October 2014 | 254 | 3.7 |
September 2014 | 220 | 2.8 |
August 2014 | 200 | 2.8 |
July 2014 | 266 | 3.6 |
April 2014 | 183 | 2.6 |
March 2014 | 223 | 2.8 | First Nutch crawl
Winter 2013 | 148 | 2.3 | Crawl conducted from December 4 through December 22, 2013
Summer 2013 | ? | ? | Crawl conducted from May 2013 through June 2013. First WARC crawl
2012 | ? | ? | Crawl conducted from January 2012 through June 2012. Final ARC crawl
2009-2010 | ? | ? | Crawl conducted from July 2009 through September 2010
2008-2009 | ? | ? | Crawl conducted from May 2008 through January 2009
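The crawl identifiers behind this table can also be listed programmatically. The following is a minimal sketch only, assuming the collection-info endpoint served by Common Crawl's index server at index.commoncrawl.org/collinfo.json; whether this is the exact API cited above [16] is not confirmed by the article.

```python
# Sketch: list available Common Crawl crawls via the (assumed) collinfo.json
# endpoint, which returns one entry per crawl with an id such as "CC-MAIN-2024-18".
import requests

crawls = requests.get("https://index.commoncrawl.org/collinfo.json", timeout=30).json()
for crawl in crawls[:5]:  # print the first few entries (typically the newest crawls)
    print(crawl["id"], "-", crawl["name"])
```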

Norvig Web Data Science Award

In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in the Benelux countries. [17] [18] The award is named after Peter Norvig, who also chairs the judging committee. [17]

Colossal Clean Crawled Corpus

Google's version of the Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short. It was constructed for the training of the T5 language model series in 2019. [19] There are some concerns over copyrighted content in C4. [20]
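For readers who want to inspect C4 directly, one common route is the "allenai/c4" copy on the Hugging Face Hub; treating that mirror as the access point is an assumption of convenience here, not something tied to Google's original release. A minimal streaming sketch:

```python
# Hedged sketch: stream a few documents from the "allenai/c4" copy of C4 on the
# Hugging Face Hub (an assumed mirror, not Google's original distribution).
# Streaming avoids downloading the multi-terabyte corpus up front.
from itertools import islice

from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
for example in islice(c4, 3):
    print(example["url"])
    print(example["text"][:200])
```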


References

  1. Rosanna Xia (February 5, 2012). "Tech entrepreneur Gil Elbaz made it big in L.A." Los Angeles Times. Retrieved July 31, 2014.
  2. "Gil Elbaz and Common Crawl". NBC News. April 4, 2013. Retrieved July 31, 2014.
  3. "So you're ready to get started". Common Crawl. Retrieved 9 June 2023.
  4. Lisa Green (January 8, 2014). "Winter 2013 Crawl Data Now Available". Retrieved June 2, 2018.
  5. "Startups - Gil Elbaz and Nova Spivack of Common Crawl - TWiST #222". This Week In Startups. January 10, 2012.
  6. Tom Simonite (January 23, 2013). "A Free Database of the Entire Web May Spawn the Next Google". MIT Technology Review. Archived from the original on June 26, 2014. Retrieved July 31, 2014.
  7. Schäfer, Roland (May 2016). "CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws". Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). Portorož, Slovenia: European Language Resources Association (ELRA): 4501.
  8. "Statistics of Common Crawl Monthly Archives by commoncrawl". commoncrawl.github.io. Retrieved 2023-04-02.
  9. Jennifer Zaino (March 13, 2012). "Common Crawl to Add New Data in Amazon Web Services Bucket". Semantic Web. Archived from the original on July 1, 2014. Retrieved July 31, 2014.
  10. Jennifer Zaino (July 16, 2012). "Common Crawl Corpus Update Makes Web Crawl Data More Efficient, Approachable for Users to Explore". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  11. Jennifer Zaino (December 18, 2012). "Blekko Data Donation Is a Big Benefit to Common Crawl". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  12. Jordan Mendelson (February 20, 2014). "Common Crawl's Move to Nutch". Common Crawl. Retrieved July 31, 2014.
  13. Jordan Mendelson (November 27, 2013). "New Crawl Data Available!". Common Crawl. Retrieved July 31, 2014.
  14. Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini (2020-06-01). "Language Models Are Few-Shot Learners". p. 14. arXiv: 2005.14165 [cs.CL]. the majority of our data is derived from raw Common Crawl with only quality-based filtering.
  15. "Blog – Common Crawl".
  16. "Collection info - Common Crawl".
  17. Lisa Green (November 15, 2012). "The Norvig Web Data Science Award". Common Crawl. Retrieved July 31, 2014.
  18. "Norvig Web Data Science Award 2014". Dutch Techcentre for Life Sciences. Archived from the original on August 15, 2014. Retrieved July 31, 2014.
  19. Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–67. arXiv:1910.10683. ISSN 1533-7928.
  20. Hern, Alex (2023-04-20). "Fresh concerns raised over sources of training material for AI systems". The Guardian. ISSN 0261-3077. Retrieved 2023-04-21.