Common Crawl

Common Crawl
Type of business: 501(c)(3) non-profit
Headquarters: San Francisco, California; Los Angeles, California, United States
Founder: Gil Elbaz
Key people: Peter Norvig, Nova Spivack, Carl Malamud, Kurt Bollacker, Joi Ito
URL: commoncrawl.org

Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public. [1] [2] Common Crawl's web archive consists of petabytes of data collected since 2008. [3] It generally completes a new crawl every month. [4]

Common Crawl was founded by Gil Elbaz. [5] Advisors to the non-profit include Peter Norvig and Joi Ito. [6] The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.
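
The crawl archives themselves can also be accessed directly. As a minimal sketch of how a single page might be retrieved, the following Python example looks up a URL in a Common Crawl index and fetches the matching WARC record; the crawl label, endpoint URLs, and index field names here are assumptions based on Common Crawl's public documentation and should be verified against commoncrawl.org before use.

```python
# Hypothetical sketch: look up a page in a Common Crawl URL index and fetch
# the corresponding WARC record. Crawl label and endpoints are assumptions.
import io
import json

import requests
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

CRAWL = "CC-MAIN-2023-50"  # assumed identifier for the December 2023 crawl
INDEX = f"https://index.commoncrawl.org/{CRAWL}-index"
DATA = "https://data.commoncrawl.org/"

def fetch_page(url: str) -> bytes:
    # Query the index for captures of the given URL (one JSON object per line).
    resp = requests.get(INDEX, params={"url": url, "output": "json"}, timeout=30)
    resp.raise_for_status()
    record = json.loads(resp.text.splitlines()[0])  # take the first capture

    # Fetch only the byte range of the WARC file that holds this record.
    start = int(record["offset"])
    end = start + int(record["length"]) - 1
    warc = requests.get(DATA + record["filename"],
                        headers={"Range": f"bytes={start}-{end}"}, timeout=30)
    warc.raise_for_status()

    # The returned range is itself a gzipped WARC member; warcio can parse it.
    for rec in ArchiveIterator(io.BytesIO(warc.content)):
        if rec.rec_type == "response":
            return rec.content_stream().read()  # raw HTML payload
    raise ValueError("no response record found")

if __name__ == "__main__":
    print(fetch_page("commoncrawl.org")[:200])
```

Larger analyses typically rely on bulk downloads or distributed processing rather than per-page HTTP requests; the lookup above is meant only to illustrate how the archives are organized.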

The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in their own jurisdictions. [7]

As of March 2023, in the most recent version of the Common Crawl dataset, 46% of documents had English as their primary language (followed by German, Russian, Japanese, French, Spanish and Chinese, all below 6%). [8]

History

Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012. [9]

In July of that year, the organization began releasing metadata files and the text output of the crawlers alongside the .arc files. [10] Previously, Common Crawl's archives had included only .arc files. [10]

In December 2012, the search engine blekko donated to Common Crawl the search engine metadata it had gathered from crawls conducted from February to October 2012. [11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO." [11]

In 2013, Common Crawl began using the Apache Software Foundation's Nutch web crawler instead of a custom crawler. [12] Common Crawl switched from .arc files to .warc files with its November 2013 crawl. [13]
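
Crawl archives published since that switch are gzipped WARC files containing interleaved request, response, and metadata records. A minimal sketch of inspecting one such file locally, assuming the warcio library and an illustrative (hypothetical) segment name:

```python
# Minimal sketch: count record types in a local Common Crawl WARC segment.
# warcio (pip install warcio) reads gzipped WARC files transparently.
from collections import Counter

from warcio.archiveiterator import ArchiveIterator

def summarize(path: str) -> Counter:
    """Tally WARC record types (e.g. 'request', 'response', 'metadata')."""
    counts = Counter()
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            counts[record.rec_type] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical file name following Common Crawl's naming pattern.
    print(summarize("CC-MAIN-20231128083443-20231128113443-00000.warc.gz"))
```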

A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020. [14]

Timeline of Common Crawl data

The following data have been collected from the official Common Crawl Blog. [15]

Crawl date | Size in TiB | Billions of pages | Comments
December 2023 | 454 | 3.35 | Crawl conducted from November 28 to December 12, 2023
June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023
April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023
February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023
December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022
October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022
April 2021 | 320 | 3.1
November 2018 | 220 | 2.6
October 2018 | 240 | 3.0
September 2018 | 220 | 2.8
August 2018 | 220 | 2.65
July 2018 | 255 | 3.25
June 2018 | 235 | 3.05
May 2018 | 215 | 2.75
April 2018 | 230 | 3.1
March 2018 | 250 | 3.2
February 2018 | 270 | 3.4
January 2018 | 270 | 3.4
December 2017 | 240 | 2.9
November 2017 | 260 | 3.2
October 2017 | 300 | 3.65
September 2017 | 250 | 3.01
August 2017 | 280 | 3.28
July 2017 | 240 | 2.89
June 2017 | 260 | 3.16
May 2017 | 250 | 2.96
April 2017 | 250 | 2.94
March 2017 | 250 | 3.07
February 2017 | 250 | 3.08
January 2017 | 250 | 3.14
December 2016 | n/a | 2.85
October 2016 | n/a | 3.25
September 2016 | n/a | 1.72
August 2016 | n/a | 1.61
July 2016 | n/a | 1.73
June 2016 | n/a | 1.23
May 2016 | n/a | 1.46
April 2016 | n/a | 1.33
February 2016 | n/a | 1.73
November 2015 | 151 | 1.82
September 2015 | 106 | 1.32
August 2015 | 149 | 1.84
July 2015 | 145 | 1.81
June 2015 | 131 | 1.67
May 2015 | 159 | 2.05
April 2015 | 168 | 2.11
March 2015 | 124 | 1.64
February 2015 | 145 | 1.9
January 2015 | 139 | 1.82
December 2014 | 160 | 2.08
November 2014 | 135 | 1.95
October 2014 | 254 | 3.7
September 2014 | 220 | 2.8
August 2014 | 200 | 2.8
July 2014 | 266 | 3.6
April 2014 | 183 | 2.6
March 2014 | 223 | 2.8 | First Nutch crawl
January 2014 | 148 | 2.3 | Crawls performed monthly
November 2013 | 102 | 2 | Data in WARC file format
July 2012 | n/a | n/a | Data in ARC file format
January 2012 | n/a | n/a | Public Data Set of Amazon Web Services
November 2011 | 40 | 5 | First availability on Amazon

Norvig Web Data Science Award

In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in Benelux. [16] [17] The award is named after Peter Norvig, who also chairs its judging committee. [16]

Google Colossal Clean Crawled Corpus

Google's version of the Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short. [18] [19]

References

  1. Rosanna Xia (February 5, 2012). "Tech entrepreneur Gil Elbaz made it big in L.A." Los Angeles Times. Retrieved July 31, 2014.
  2. "Gil Elbaz and Common Crawl". NBC News. April 4, 2013. Retrieved July 31, 2014.
  3. "So you're ready to get started". Common Crawl. Retrieved 9 June 2023.
  4. Lisa Green (January 8, 2014). "Winter 2013 Crawl Data Now Available". Retrieved June 2, 2018.
  5. "Startups - Gil Elbaz and Nova Spivack of Common Crawl - TWiST #222". This Week In Startups. January 10, 2012.
  6. Tom Simonite (January 23, 2013). "A Free Database of the Entire Web May Spawn the Next Google". MIT Technology Review. Archived from the original on June 26, 2014. Retrieved July 31, 2014.
  7. Schäfer, Roland (May 2016). "CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws". Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). Portorož, Slovenia: European Language Resources Association (ELRA): 4501.
  8. "Statistics of Common Crawl Monthly Archives by commoncrawl". commoncrawl.github.io. Retrieved 2023-04-02.
  9. Jennifer Zaino (March 13, 2012). "Common Crawl to Add New Data in Amazon Web Services Bucket". Semantic Web. Archived from the original on July 1, 2014. Retrieved July 31, 2014.
  10. Jennifer Zaino (July 16, 2012). "Common Crawl Corpus Update Makes Web Crawl Data More Efficient, Approachable for Users to Explore". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  11. Jennifer Zaino (December 18, 2012). "Blekko Data Donation Is a Big Benefit to Common Crawl". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  12. Jordan Mendelson (February 20, 2014). "Common Crawl's Move to Nutch". Common Crawl. Retrieved July 31, 2014.
  13. Jordan Mendelson (November 27, 2013). "New Crawl Data Available!". Common Crawl. Retrieved July 31, 2014.
  14. Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini (2020-06-01). "Language Models Are Few-Shot Learners". p. 14. arXiv: 2005.14165 [cs.CL]. the majority of our data is derived from raw Common Crawl with only quality-based filtering.
  15. "Blog – Common Crawl".
  16. Lisa Green (November 15, 2012). "The Norvig Web Data Science Award". Common Crawl. Retrieved July 31, 2014.
  17. "Norvig Web Data Science Award 2014". Dutch Techcentre for Life Sciences. Archived from the original on August 15, 2014. Retrieved July 31, 2014.
  18. "Google achieves state-of-the-art NLP performance with an enormous language model and data set". VentureBeat. 2019-10-24. Retrieved 2023-04-21.
  19. Hern, Alex (2023-04-20). "Fresh concerns raised over sources of training material for AI systems". The Guardian. ISSN 0261-3077. Retrieved 2023-04-21.