Forestle was an ecologically inspired search engine created by Christian Kroll in Wittenberg, Germany, in 2008 and discontinued in 2011. Forestle supported rainforest conservation through donations of ad revenue and aimed to reduce CO2 emissions. It was similar to the search engine Ecosia, which plants new trees with its ad revenue. Forestle was briefly associated with Google before switching to Yahoo.
Forestle saved 0.1 square meters (about 0.12 square yards) of rain forest per search event. It guaranteed to donate 90% of its advertising revenue to the Adopt an Acre program of its partner organization, The Nature Conservancy, which used the donations to preserve rainforest. As of December 9, 2009, about 2,910,000 square meters of rain forest had been saved. [1] By November 20, 2010, about 9,250,000 square meters had been saved.
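Taken together, these figures imply a search volume in the tens of millions: assuming the stated rate of 0.1 square meters per search, the roughly 9,250,000 square meters reported by November 2010 would correspond to about 92.5 million searches (9,250,000 / 0.1 = 92,500,000).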
A Forestle search was also essentially CO2-neutral: Forestle.org offset the carbon-dioxide emissions caused by the electricity consumed by all Forestle servers, the network infrastructure, and the computers of its users by purchasing an equivalent amount of renewable energy certificates. [2] The certificates were paid for out of the 10% of revenue remaining after the rainforest donations. This made Forestle one of the few web search sites that were green-certified.
The number of search requests on Forestle.org grew significantly: [3] within two months, it increased more than sixfold, from about 4,000 per day on average in December 2008 to more than 24,000 per day in February 2009. A report about Forestle in a major German newspaper [4] at the end of February 2009 briefly pushed the number of search events on Forestle.org close to its all-time maximum within a week (by 3 March 2009). [5] As of December 2009, the number of search events exceeded 200,000 per day.
The actual impact of Forestle.org and similar 'green' search engines has been debated; a (since removed) notice on Forestle asking users not to click on advertisements merely to 'help' increase advertising revenue drew particular criticism. [6]
The site pioneered a thumbnail website preview for all search results. It also offered searches with so-called indicators: for instance, one could search Wikipedia directly (instead of the entire WWW) for 'Basic Income' by typing 'Wikipedia::Basic Income'. [7] The language used for an indicator search was chosen automatically from the Forestle domain, so a search on the US web site http://us.Forestle.org or on the British web site http://uk.Forestle.org led to a search on the English Wikipedia (http://en.wikipedia.org), while a search on the German web site http://de.Forestle.org (or on the Austrian web site http://at.Forestle.org) led to a search on the German Wikipedia (http://de.wikipedia.org). Forestle also provided several browser plugins, could be added to iGoogle, and was available in English and German (full versions) as well as in Spanish and Dutch (details partially in English).
On November 27, 2009, Forestle received the Utopia Award [8] as an exemplary organisation enabling people to live more sustainably. The jury emphasized that Forestle "offers a simple and strong possibility to contribute to protect existing rain forest through the use of an everyday [...] service" and that "thereby Forestle unfolds a high effectiveness and sharpens the consumers' sense for the impact of consumer behavior". [9]
Forestle was associated with Google until Google revoked the site's search functionality after four days, following a dispute over whether its terms of service were being broken. Forestle.org stated that Google had not actually provided reasons for stopping the association, [10] although Forestle had posted a message on its website at the time saying that Google had contacted it and explained the reason for banning Forestle from using Google Custom Search. Google's decision to stop supporting Forestle immediately drew international attention. [11] [12] Details of the conflict between Google and Forestle remain disputed. [13] Forestle later became associated with Yahoo. [14]
Forestle was discontinued and redirected to the similar search engine Ecosia on January 1, 2011. [15]
robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.
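For illustration, a minimal robots.txt file placed at a site's root might look like the following; the crawler name and paths are placeholders, not part of any real site:

    # Block one (hypothetical) crawler from the whole site
    User-agent: ExampleBot
    Disallow: /

    # Allow every other crawler everywhere except /private/
    User-agent: *
    Disallow: /private/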
AlltheWeb was an Internet search engine that made its debut in mid-1999 and was closed in 2011. It grew out of FTP Search, Tor Egge's doctoral thesis project at the Norwegian University of Science and Technology, begun in 1994, which in turn led to the formation of Fast Search & Transfer (FAST), established on July 16, 1997.
Yahoo! Japan is a Japanese web portal. It was the most-visited website in Japan, nearing monopolistic status.
Fuck for Forest (FFF) is a non-profit environmental organisation founded in 2004 in Norway by Leona Johansson and Tommy Hol Ellingsen. It funds itself through a website of sexually explicit videos and photographs, charging a membership fee for access. A portion of funds are donated to the cause of rescuing the world's rainforests. It is the world's first eco-porn organization and may be the only porn website specifically created to raise money for a cause. The group moved from Oslo, Norway, to Berlin, Germany, following the trial of its founders for having sex in public.
A sitemap is a list of pages of a web site within a domain.
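One common machine-readable form is the XML Sitemaps protocol; a minimal sitemap listing a single page might look roughly like this (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2010-11-20</lastmod>
      </url>
    </urlset>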
hCalendar is a microformat standard for displaying a semantic (X)HTML representation of iCalendar-format calendar information about an event, on web pages, using HTML classes and rel attributes.
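A minimal hCalendar event might be marked up roughly as follows; the class names come from the microformat, while the event details are placeholders loosely based on this article:

    <div class="vevent">
      <span class="summary">Utopia Award ceremony</span>
      on <abbr class="dtstart" title="2009-11-27">27 November 2009</abbr>
      in <span class="location">Berlin</span>
    </div>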
nofollow is a setting on a web page hyperlink that directs search engines not to use the link for page ranking calculations. It is specified in the page as a type of link relation; that is: <a rel="nofollow" ...>. Because search engines often calculate a site's importance according to the number of hyperlinks from other sites, the nofollow setting allows website authors to indicate that the presence of a link is not an endorsement of the target site's importance.
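A complete link carrying the attribute might look like this (the URL and link text are placeholders):

    <a href="https://www.example.com/" rel="nofollow">Example site</a>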
A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news.
GenieKnows Inc. was a privately owned vertical search engine company based in Halifax, Nova Scotia. It was started by Rami Hamodah, who also started SwiftlyLabs.com and Salesboom.com. Like those of many internet search engines, its revenue model centers on an online advertising platform and B2B transactions. It focuses on a set of search markets, or verticals, including health search, video games search, and local business directory search.
Ecocho was a search engine that aimed to offset carbon emissions by putting 70% of its revenue toward 'carbon offset credits'. The site launched on 14 April 2008.
LeapFish.com was a search aggregator that retrieved results from other portals and search engines, including Google, Bing and Yahoo!, as well as blog and video search engines. It was a registered trademark of Dotnext Inc, launched on 3 November 2008.
DeepPeep was a search engine that aimed to crawl and index every database on the public Web. Unlike traditional search engines, which crawl existing webpages and their hyperlinks, DeepPeep aimed to allow access to the so-called Deep web, World Wide Web content that is reachable only through, for instance, queries typed into database search forms. The project started at the University of Utah and was overseen by Juliana Freire, an associate professor at the university's School of Computing WebDB group. The goal was to make 90% of all WWW content accessible, according to Freire. The project ran a beta search engine and was sponsored by the University of Utah and a $243,000 grant from the National Science Foundation. It generated worldwide interest.
SearchMe was a visual search engine based in Mountain View, California. It organized search results as snapshots of web pages, in an interface similar to the album-selection view of the iPhone and iTunes.
A canonical link element is an HTML element that helps webmasters prevent duplicate content issues in search engine optimization by specifying the "canonical" or "preferred" version of a web page. It is described in RFC 6596, which went live in April 2012.
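For example, a page reachable under several URLs can declare its preferred address with a single element in its <head> (the URL is a placeholder):

    <link rel="canonical" href="https://www.example.com/article">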
Schema.org is a reference website that publishes documentation and guidelines for using structured data mark-up on web-pages. Its main objective is to standardize HTML tags to be used by webmasters for creating rich results about a certain topic of interest. It is a part of the semantic web project, which aims to make document mark-up codes more readable and meaningful to both humans and machines.
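Using the microdata syntax, for instance, a page might describe an organization roughly as follows; the organization name and URL are purely illustrative:

    <div itemscope itemtype="https://schema.org/Organization">
      <span itemprop="name">Example Conservation Trust</span>
      <a itemprop="url" href="https://www.example.org/">www.example.org</a>
    </div>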
This page provides a full timeline of web search engines, starting from WHOIS in 1982, the Archie search engine in 1990, and subsequent developments in the field. It is complementary to the history of web search engines page that provides more qualitative detail on the history.
Ecosia is a search engine based in Berlin, Germany. The company uses renewable energy to power its servers and invests its profits in tree-planting projects, aiming to absorb more CO2 than it emits.
Searx is a free and open-source metasearch engine, available under the GNU Affero General Public License version 3, with the aim of protecting the privacy of its users. To this end, Searx does not share users' IP addresses or search history with the search engines from which it gathers results. Tracking cookies served by the search engines are blocked, preventing user-profiling-based results modification. By default, Searx queries are submitted via HTTP POST, to prevent users' query keywords from appearing in webserver logs. Searx was inspired by the Seeks project, though it does not implement Seeks' peer-to-peer user-sourced results ranking.
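The effect of submitting via HTTP POST can be illustrated with a plain HTML search form: because the method is POST rather than GET, the query text travels in the request body instead of the URL, so it does not appear in ordinary webserver access logs. The field name and action path below are illustrative, not a description of Searx's actual interface:

    <form method="post" action="/search">
      <input type="text" name="q">
      <input type="submit" value="Search">
    </form>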
Mojeek is a search engine based in the United Kingdom. The search results provided by Mojeek come from its own index of web pages, created by crawling the web.