Grub (search engine)

Grubby, the Official Grub Mascot

Grub was an open source distributed search crawler platform.

Users of Grub could download the peer-to-peer Grub client software and let it run during their computer's idle time. The client indexed its assigned URLs and sent the results back to the main Grub server in a highly compressed form. The collective crawl could then, in theory, be utilized by an indexing system, such as the one proposed for Wikia Search. By asking thousands of clients each to crawl and analyze a small portion of the web, Grub was able to build a large snapshot of the web quickly.
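The division of labor can be sketched in a few lines of Python: the client asks a central coordinator for a small batch of URLs, fetches them, compresses the results, and reports back. This is only a minimal illustration of the idle-time crawl loop described above; the coordinator address, endpoints, and payload format are hypothetical and do not reflect the actual Grub protocol.

    # Minimal sketch of a distributed-crawl client loop, in the spirit of the
    # Grub client described above. The coordinator URL, endpoints, and payload
    # format are hypothetical; the real Grub protocol is not documented here.
    import json
    import urllib.request
    import zlib

    COORDINATOR = "http://coordinator.example.org"   # hypothetical central server

    def fetch_assignment():
        # Ask the central server for a small batch of URLs to crawl.
        with urllib.request.urlopen(f"{COORDINATOR}/assignment") as resp:
            return json.loads(resp.read())            # e.g. {"urls": [...]}

    def crawl(urls):
        results = {}
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    results[url] = resp.read().decode("utf-8", errors="replace")
            except OSError:
                results[url] = ""                      # record failures as empty
        return results

    def report(results):
        # Compress the crawl results before sending them back, as Grub did.
        payload = zlib.compress(json.dumps(results).encode("utf-8"))
        req = urllib.request.Request(f"{COORDINATOR}/results", data=payload,
                                     headers={"Content-Encoding": "deflate"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        batch = fetch_assignment()
        report(crawl(batch["urls"]))

In the real system the assignment, deduplication, and analysis logic were considerably more involved; the sketch only shows the split between many small clients and one coordinating server.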

Wikia has since released the entire Grub package under an open-source software license. However, the old Grub clients are no longer functional; new clients can be found on the Wikia wiki.

History

The project was started in 2000 by Kord Campbell, Igor Stojanovski, and Ledio Ago in Oklahoma City.[1] LookSmart acquired the intellectual property rights to Grub in January 2003 for $1.3 million in cash and stock.[2] For a short time the original team continued working on the project, releasing several new versions of the software, albeit under a closed license.

Operations of Grub were shut down in late 2005. On July 27, 2007, Jimmy Wales announced that Wikia, Inc., the for-profit company developing the open source search engine Wikia Search, had acquired Grub from LookSmart[3] on July 17 for $50,000.[4]

Related Research Articles

Web crawler

A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing.

Search engine optimization (SEO) is the process of improving the quality and quantity of traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.

LookSmart is an American search advertising, content management, online media, and technology company. It provides search, machine learning and chatbot technologies as well as pay-per-click and contextual advertising services.

Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. Such systems may allow users to voluntarily offer their own computing and bandwidth resources for crawling web pages. By spreading the load of these tasks across many computers, costs that would otherwise be spent on maintaining large computing clusters are avoided.

BitTorrent, also referred to as simply torrent, is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner. The protocol is developed and maintained by Rainberry, Inc., and was first released in 2001.

Apache Nutch

Apache Nutch is a highly extensible and scalable open source web crawler software project.

The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search-engine programs. This is in contrast to the "surface web", which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with inventing the term in 2001 as a search-indexing term.

BitTorrent is a proprietary adware BitTorrent client developed by Bram Cohen and Rainberry, Inc., used for uploading and downloading files via the BitTorrent protocol. BitTorrent was the first client written for the protocol. It is often nicknamed Mainline by developers, denoting its official origins. Since version 6.0 the BitTorrent client has been a rebranded version of μTorrent; as a result, it is no longer open source. It is currently available for Microsoft Windows, Mac, Linux, iOS and Android. There are currently two versions of the software: "BitTorrent Classic", which inherits the historical version numbering, and "BitTorrent Web", which uses its own version numbering.

YaCy

YaCy is a free distributed search engine built on the principles of peer-to-peer (P2P) networks, created by Michael Christen in 2003. The engine is written in Java and, as of September 2006, was distributed across several hundred computers, so-called YaCy peers. Each YaCy peer independently crawls the Internet, analyzes and indexes the web pages it finds, and stores the indexing results in a common database that is shared with other peers using peer-to-peer principles. It is a search engine that anyone can use to build a search portal for their intranet or to help search the public internet.

Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may directly access the World Wide Web using the Hypertext Transfer Protocol or a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.
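As a minimal illustration of the automated case, the following Python sketch uses only the standard library to fetch a page over HTTP, extract link targets with an HTML parser, and write them to a local CSV file. The target URL is a placeholder, and real scrapers typically add politeness rules, error handling, and site-specific extraction logic.

    # Minimal scraping sketch using only the Python standard library: fetch a
    # page over HTTP, extract link targets, and save them to a local CSV file.
    # The target URL below is a placeholder, not a real data source.
    import csv
    import urllib.request
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Collect the href attribute of every anchor tag encountered.
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def scrape_links(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = LinkExtractor()
        parser.feed(html)
        return parser.links

    if __name__ == "__main__":
        with open("links.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for link in scrape_links("https://example.com/"):
                writer.writerow([link])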

Search engine

A search engine is a software system that finds web pages matching a web search query. It searches the World Wide Web in a systematic way for particular information specified in a textual query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of hyperlinks to web pages, images, videos, infographics, articles, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories and social bookmarking sites, which are maintained by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Any internet-based content that cannot be indexed and searched by a web search engine falls under the category of the deep web. Modern search engines are based on techniques and methods developed in the field of information retrieval.
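The indexing and query steps can be illustrated with a toy example in Python: build an inverted index that maps each term to the documents containing it, then answer a query by intersecting the posting lists of its terms. The three sample documents are invented for illustration; real engines add crawling, ranking, and massive scale on top of this idea.

    # Toy illustration of indexing and querying: an inverted index maps terms
    # to the set of document IDs containing them, and a query is answered by
    # intersecting the posting lists of its terms.
    from collections import defaultdict

    documents = {
        1: "grub was a distributed search crawler",
        2: "a web crawler browses the world wide web",
        3: "search engines index crawled pages",
    }

    # Build the inverted index: term -> set of document IDs containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.split():
            index[term].add(doc_id)

    def search(query):
        # Return the IDs of documents containing every query term.
        postings = [index[t] for t in query.split()]
        return sorted(set.intersection(*postings)) if postings else []

    print(search("search crawler"))   # -> [1]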

Wikia Search

Wikia Search was a short-lived free and open-source web search engine launched by Wikia, a for-profit wiki-hosting company founded by Jimmy Wales and Angela Beesley. Wikia Search followed other experiments by Wikia into search engine technology and officially launched as a "public alpha" on January 7, 2008. The roll-out version of the search interface was widely criticized by reviewers in mainstream media.

A distributed search engine is a search engine where there is no central server. Unlike traditional centralized search engines, work such as crawling, data mining, indexing, and query processing is distributed among several peers in a decentralized manner where there is no single point of control.

Fandom (website)

Fandom is a wiki hosting service that hosts wikis mainly on entertainment topics. The privately held, for-profit Delaware company was founded in October 2004 by Wikipedia co-founder Jimmy Wales and Angela Beesley. Fandom was acquired in 2018 by TPG Inc. and Jon Miller through Integrated Media Co.

Seeks is a free and open-source project licensed under the GNU Affero General Public License version 3 (AGPL-3.0-or-later). It exists to create an alternative to the current market-leading search engines, driven by user concerns rather than corporate interests. The original manifesto was created by Emmanuel Benazera and Sylvio Drouin and published in October 2006. The project was under active development until April 2014, with both stable releases of the engine and revisions of the source code available for public use. In September 2011, Seeks won an innovation award at the Open World Forum Innovation Awards. The Seeks source code has not been updated since April 28, 2014 and no Seeks nodes have been usable since February 6, 2016.

Elasticsearch

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is dual-licensed under the source-available Server Side Public License and the Elastic License, while some parts fall under the proprietary (source-available) Elastic License. Official clients are available in Java, .NET (C#), PHP, Python, Ruby and many other languages. According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine.

The following outline is provided as an overview of and topical guide to search engines.

Gigablast

Gigablast was an American free and open-source web search engine and directory. Founded in 2000, it was an independent engine and web crawler, developed and maintained by Matt Wells, a former Infoseek employee and New Mexico Tech graduate. During early April 2023, the website went offline without warning and without any official statement.

Hierarchical Cluster Engine Project

Hierarchical Cluster Engine (HCE) is a FOSS solution for constructing custom network meshes or distributed cluster structures with several types of relations between nodes. It formalizes data-flow processing from an upper-level central source node down to lower-level nodes and back, as well as the handling of management requests from multiple source points; natively supports reduction of results from multiple nodes; includes a full-text search engine and data storage; provides both transactional and transaction-less request processing; allows flexible run-time changes to the cluster infrastructure; and offers client-side integration APIs with bindings for many languages, all in one product built in C++.

Norconex Web Crawler is a free and open-source web crawling and web scraping tool written in Java and released under an Apache License. It can export data to many repositories, such as Apache Solr, Elasticsearch, Microsoft Azure Cognitive Search, Amazon CloudSearch, and more.

References