Type of business | 501(c)(3) non-profit |
---|---|
Founded | 2007 |
Headquarters | San Francisco, California; Los Angeles, California, United States |
Founder(s) | Gil Elbaz |
Key people | Peter Norvig, Rich Skrenta, Eva Ho |
URL | commoncrawl.org |
Content license | Apache 2.0 (software) |
Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public. [1] [2] Common Crawl's web archive consists of petabytes of data collected since 2008. [3] It generally completes a new crawl every month. [4]
Common Crawl was founded by Gil Elbaz. [5] Advisors to the non-profit include Peter Norvig and Joi Ito. [6] The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.
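As an illustration of how the archive can be consumed, the following is a minimal Python sketch, not an official Common Crawl tool. It assumes the CDX index API at index.commoncrawl.org (with the `filename`, `offset`, and `length` response fields), the data.commoncrawl.org download host, and an example crawl ID; verify these against the current documentation before relying on them.

```python
# A minimal sketch of fetching one archived page from Common Crawl.
# Assumes the CDX index API at index.commoncrawl.org and the
# data.commoncrawl.org download host; the crawl ID and lookup URL
# are placeholders. Requires: pip install requests warcio
import io
import json
import requests
from warcio.archiveiterator import ArchiveIterator

CRAWL_ID = "CC-MAIN-2023-14"   # example crawl (March/April 2023)
TARGET = "commoncrawl.org"     # example URL to look up

# 1. Ask the crawl's CDX index where the page is stored.
resp = requests.get(
    f"https://index.commoncrawl.org/{CRAWL_ID}-index",
    params={"url": TARGET, "output": "json", "limit": "1"},
    timeout=30,
)
resp.raise_for_status()
hit = json.loads(resp.text.splitlines()[0])

# 2. Fetch only the byte range holding that WARC record.
start = int(hit["offset"])
end = start + int(hit["length"]) - 1
warc_bytes = requests.get(
    "https://data.commoncrawl.org/" + hit["filename"],
    headers={"Range": f"bytes={start}-{end}"},
    timeout=60,
).content

# 3. Parse the record with warcio.
for record in ArchiveIterator(io.BytesIO(warc_bytes)):
    if record.rec_type == "response":
        print(record.rec_headers.get_header("WARC-Target-URI"))
        print(record.content_stream().read()[:200])
```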
The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other legal jurisdictions. [7]
English is the primary language for 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish and Chinese, each with less than 6% of documents. [8]
Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012. [9]
The organization began releasing metadata files and the text output of the crawlers alongside .arc files in July 2012. [10] Common Crawl's archives had only included .arc files previously. [10]
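The per-crawl file listings can be retrieved directly over HTTPS. The sketch below assumes that each crawl publishes a listing such as `crawl-data/<crawl-id>/wet.paths.gz` (for the extracted-text files) on data.commoncrawl.org; the crawl ID and listing filename are assumptions to check against the crawl's announcement page.

```python
# A minimal sketch of listing the extracted-text (WET) files for one crawl.
# Assumes the per-crawl listing file crawl-data/<CRAWL_ID>/wet.paths.gz on
# data.commoncrawl.org; the crawl ID is a placeholder.
import gzip
import requests

CRAWL_ID = "CC-MAIN-2023-14"  # example crawl
listing_url = f"https://data.commoncrawl.org/crawl-data/{CRAWL_ID}/wet.paths.gz"

raw = requests.get(listing_url, timeout=60).content
paths = gzip.decompress(raw).decode("utf-8").splitlines()

print(f"{len(paths)} WET files in {CRAWL_ID}")
print(paths[0])  # each path is relative to https://data.commoncrawl.org/
```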
In December 2012, blekko donated to Common Crawl the search engine metadata it had gathered from crawls conducted between February and October 2012. [11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO." [11]
In 2013, Common Crawl began using the Apache Software Foundation's Nutch webcrawler instead of a custom crawler. [12] Common Crawl switched from using .arc files to .warc files with its November 2013 crawl. [13]
A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020. [14]
The following data have been collected from the official Common Crawl Blog [15] and Common Crawl's API. [16]
Crawl date | Size in TiB | Billions of pages | Comments |
---|---|---|---|
April 2024 | 386 | 2.7 | Crawl conducted from April 12 to April 24, 2024 |
February/March 2024 | 425 | 3.16 | Crawl conducted from February 20 to March 5, 2024 |
December 2023 | 454 | 3.35 | Crawl conducted from November 28 to December 12, 2023 |
June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023 |
April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023 |
February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023 |
December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022 |
October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022 |
April 2021 | 320 | 3.1 | |
November 2018 | 220 | 2.6 | |
October 2018 | 240 | 3.0 | |
September 2018 | 220 | 2.8 | |
August 2018 | 220 | 2.65 | |
July 2018 | 255 | 3.25 | |
June 2018 | 235 | 3.05 | |
May 2018 | 215 | 2.75 | |
April 2018 | 230 | 3.1 | |
March 2018 | 250 | 3.2 | |
February 2018 | 270 | 3.4 | |
January 2018 | 270 | 3.4 | |
December 2017 | 240 | 2.9 | |
November 2017 | 260 | 3.2 | |
October 2017 | 300 | 3.65 | |
September 2017 | 250 | 3.01 | |
August 2017 | 280 | 3.28 | |
July 2017 | 240 | 2.89 | |
June 2017 | 260 | 3.16 | |
May 2017 | 250 | 2.96 | |
April 2017 | 250 | 2.94 | |
March 2017 | 250 | 3.07 | |
February 2017 | 250 | 3.08 | |
January 2017 | 250 | 3.14 | |
December 2016 | — | 2.85 | |
October 2016 | — | 3.25 | |
September 2016 | — | 1.72 | |
August 2016 | — | 1.61 | |
July 2016 | — | 1.73 | |
June 2016 | — | 1.23 | |
May 2016 | — | 1.46 | |
April 2016 | — | 1.33 | |
February 2016 | — | 1.73 | |
November 2015 | 151 | 1.82 | |
September 2015 | 106 | 1.32 | |
August 2015 | 149 | 1.84 | |
July 2015 | 145 | 1.81 | |
June 2015 | 131 | 1.67 | |
May 2015 | 159 | 2.05 | |
April 2015 | 168 | 2.11 | |
March 2015 | 124 | 1.64 | |
February 2015 | 145 | 1.9 | |
January 2015 | 139 | 1.82 | |
December 2014 | 160 | 2.08 | |
November 2014 | 135 | 1.95 | |
October 2014 | 254 | 3.7 | |
September 2014 | 220 | 2.8 | |
August 2014 | 200 | 2.8 | |
July 2014 | 266 | 3.6 | |
April 2014 | 183 | 2.6 | |
March 2014 | 223 | 2.8 | First Nutch crawl |
Winter 2013 | 148 | 2.3 | Crawl conducted from December 4 through December 22, 2013 |
Summer 2013 | ? | ? | Crawl conducted from May 2013 through June 2013. First WARC crawl |
2012 | ? | ? | Crawl conducted from January 2012 through June 2012. Final ARC crawl |
2009-2010 | ? | ? | Crawl conducted from July 2009 through September 2010 |
2008-2009 | ? | ? | Crawl conducted from May 2008 through January 2009 |
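The crawl metadata summarized in the table above can also be retrieved programmatically. The following is a minimal sketch, assuming the `collinfo.json` listing on index.commoncrawl.org and its `id` and `name` fields; the field names and ordering are assumptions to verify against the live response.

```python
# A minimal sketch of listing available Common Crawl crawls, assuming the
# collinfo.json endpoint and its "id"/"name" fields.
import requests

crawls = requests.get("https://index.commoncrawl.org/collinfo.json", timeout=30).json()
for crawl in crawls[:5]:  # typically listed newest first
    print(crawl["id"], "-", crawl["name"])
```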
In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in the Benelux countries. [17] [18] The award is named after Peter Norvig, who also chairs its judging committee. [17]
Google's cleaned and filtered version of Common Crawl, the Colossal Clean Crawled Corpus (C4), was constructed to train the T5 series of language models in 2019. [19] There are some concerns over copyrighted content in C4. [20]
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing.
robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.
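As a brief illustration of how a crawler can honor this protocol, here is a minimal sketch using Python's standard-library parser; the robots.txt content and bot name are invented for the example.

```python
# A minimal sketch of honoring robots.txt with Python's standard library.
# The robots.txt content and user-agent name are made up.
from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))  # True
print(rp.can_fetch("ExampleBot", "https://example.com/private/x"))   # False
```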
Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.
Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. Such systems may allow for users to voluntarily offer their own computing and bandwidth resources towards crawling web pages. By spreading the load of these tasks across many computers, costs that would otherwise be spent on maintaining large computing clusters are avoided.
Googlebot is the web crawler software used by Google that collects documents from the web to build a searchable index for the Google Search engine. This name is actually used to refer to two different types of web crawlers: a desktop crawler and a mobile crawler.
Apache Nutch is a highly extensible and scalable open source web crawler software project.
In web archiving, an archive site is a website that stores information on webpages from the past for anyone to view.
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may directly access the World Wide Web using the Hypertext Transfer Protocol or a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.
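A minimal sketch of this kind of automated extraction, using only the Python standard library, is shown below; the URL is a placeholder and a real scraper would add politeness delays and robots.txt checks.

```python
# A minimal web-scraping sketch: fetch a page over HTTP and collect the
# href targets of its <a> tags. The URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
parser = LinkCollector()
parser.feed(html)
print(parser.links)
```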
Sitemaps is a protocol in XML format meant for a webmaster to inform search engines about URLs on a website that are available for web crawling. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs of the site. This allows search engines to crawl the site more efficiently and to find URLs that may be isolated from the rest of the site's content. The Sitemaps protocol is a URL inclusion protocol and complements robots.txt, a URL exclusion protocol.
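The sketch below shows what a minimal sitemap looks like and how it can be read in Python; the URLs and values are invented for illustration.

```python
# A minimal sketch of the Sitemaps XML format and of reading it in Python.
# The URLs and values are invented for illustration.
import xml.etree.ElementTree as ET

sitemap_xml = """\
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
for url in root.findall("sm:url", ns):
    print(url.findtext("sm:loc", namespaces=ns),
          url.findtext("sm:lastmod", namespaces=ns))
```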
Heritrix is a web crawler designed for web archiving. It was written by the Internet Archive. It is available under a free software license and written in Java. The main interface is accessible using a web browser, and there is a command-line tool that can optionally be used to initiate crawls.
Web archiving is the process of collecting, preserving and providing access to material from the World Wide Web. The aim is to ensure that information is preserved in an archival format for research and the public.
A focused crawler is a web crawler that collects Web pages that satisfy some specific property, by carefully prioritizing the crawl frontier and managing the hyperlink exploration process. Some predicates may be based on simple, deterministic and surface properties. For example, a crawler's mission may be to crawl pages from only the .jp domain. Other predicates may be softer or comparative, e.g., "crawl pages about baseball", or "crawl pages with large PageRank". An important page property pertains to topics, leading to 'topical crawlers'. For example, a topical crawler may be deployed to collect pages about solar power, swine flu, or even more abstract concepts like controversy while minimizing resources spent fetching pages on other topics. Crawl frontier management may not be the only device used by focused crawlers; they may use a Web directory, a Web text index, backlinks, or any other Web artifact.
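A schematic sketch of frontier prioritization is given below; it is not a production crawler, and the keyword-based relevance function stands in for the page classifiers a real focused crawler would use.

```python
# A schematic sketch of focused-crawl frontier management: URLs are
# prioritized by a topic-relevance score so more on-topic pages are
# fetched first. The scoring function is a toy stand-in.
import heapq

SEED_URLS = ["https://example.com/solar-power", "https://example.com/sports"]

def relevance(url: str) -> float:
    """Toy relevance score: keyword match on the URL (a real focused
    crawler would classify fetched page content instead)."""
    return 1.0 if "solar" in url else 0.1

# heapq is a min-heap, so push negative scores to pop the best URL first.
frontier = [(-relevance(u), u) for u in SEED_URLS]
heapq.heapify(frontier)

while frontier:
    score, url = heapq.heappop(frontier)
    print(f"fetch {url} (relevance {-score:.1f})")
    # A real crawler would download the page here, extract its outlinks,
    # score them, and push the promising ones back onto the frontier.
```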
Semantic HTML is the use of HTML markup to reinforce the semantics, or meaning, of the information in web pages and web applications rather than merely to define its presentation or look. Semantic HTML is processed by traditional web browsers as well as by many other user agents. CSS is used to suggest how it is presented to human users.
Metadata is "data that provides information about other data", but not the content of the data itself, such as the text of a message or the image itself. There are many distinct types of metadata.
The WARC archive format specifies a method for combining multiple digital resources into an aggregate archive file together with related information. These combined resources are saved as a WARC file which can be replayed on appropriate software, or utilized by archive websites such as the Wayback Machine.
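For illustration, here is a minimal sketch of writing a single WARC record with the warcio library; the URL and payload are placeholders, and the call pattern should be checked against warcio's documentation.

```python
# A minimal sketch of writing one WARC "response" record with warcio
# (pip install warcio). The URL and body are placeholders.
from io import BytesIO

from warcio.statusandheaders import StatusAndHeaders
from warcio.warcwriter import WARCWriter

with open("example.warc.gz", "wb") as output:
    writer = WARCWriter(output, gzip=True)
    http_headers = StatusAndHeaders(
        "200 OK", [("Content-Type", "text/html")], protocol="HTTP/1.1"
    )
    record = writer.create_warc_record(
        "https://example.com/",
        "response",
        payload=BytesIO(b"<html><body>hello</body></html>"),
        http_headers=http_headers,
    )
    writer.write_record(record)
```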
Scrapy is a free and open-source web-crawling framework written in Python. Originally designed for web scraping, it can also be used to extract data using APIs or as a general-purpose web crawler. It is currently maintained by Zyte, a web-scraping development and services company.
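A minimal spider along the lines of Scrapy's own tutorial is sketched below; the demo site and CSS selectors are examples, not part of Common Crawl.

```python
# A minimal Scrapy spider sketch (pip install scrapy). Save as e.g.
# quotes_spider.py and run with: scrapy runspider quotes_spider.py
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link, if any.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```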
The Internet Memory Foundation was a non-profit foundation whose purpose was archiving content of the World Wide Web. It hosted projects and research that included the preservation and protection of digital media content in various forms to form a digital library of cultural content. As of August 2018, it is defunct.
Apache Tika is a content detection and analysis framework, written in Java, stewarded at the Apache Software Foundation. It detects and extracts metadata and text from over a thousand different file types and, as well as providing a Java library, has server and command-line editions suitable for use from other programming languages.
In natural language processing, linguistics, and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is being maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation, but has been a point of focal activity for several W3C community groups, research projects, and infrastructure efforts since then.
StormCrawler is an open-source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.