Search neutrality is the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance. [1] This means that when a user enters a query, the engine should return the most relevant results found in the provider's domain (the sites the engine has indexed), without reordering the results (other than to rank them by relevance), excluding results, or otherwise biasing what is returned.
Search neutrality is related to network neutrality in that they both aim to keep any one organization from limiting or altering a user's access to services on the Internet. Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content.
The term "search neutrality" in context of the internet appears as early as March 2009 in an academic paper by the Polish-American mathematician Andrew Odlyzko titled, "Network Neutrality, Search Neutrality, and the Never-ending Conflict between Efficiency and Fairness in Markets". [2] In this paper, Odlykzo predicts that if net neutrality were to be accepted as a legal or regulatory principle, then the questions surrounding search neutrality would be the next controversies. Indeed, in December 2009 the New York Times published an opinion letter by Foundem co-founder and lead complainant in an anti-trust complaint against Google, Adam Raff, which likely brought the term to the broader public. According to Raff in his opinion letter, search neutrality ought to be "the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance". [1] On October 11, 2009, Adam and his wife Shivaun launched SearchNeutrality.org, an initiative dedicated to promoting investigations against Google's search engine practices. [3] There, the Raffs note that they chose to frame their issue with Google as "search neutrality" in order to benefit from the focus and interest on net neutrality. [3]
In contrast to net neutrality, there is less consensus on questions such as "what is search neutrality?" or "what legislative or regulatory principles are appropriate to protect it?". The idea that neutrality means equal treatment, regardless of the content, comes from debates on net neutrality. [4] Neutrality in search is complicated by the fact that search engines, by design and in implementation, are not intended to be neutral or impartial. Rather, search engines and other information retrieval applications are designed to collect and store information (indexing), receive a query from a user, search for and filter relevant information based on that query (searching/filtering), and then present the user with only a subset of those results, ranked from most relevant to least relevant (ranking). "Relevance" is itself a form of bias used to favor some results over others. Relevance is defined within the search engine so that users are satisfied with the results, and is therefore subject to users' preferences. Because relevance is so subjective, putting search neutrality into practice has proven contentious.
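The indexing/searching/ranking pipeline described above can be made concrete with a minimal sketch. The following toy example (an illustration only, not any production engine's actual algorithm) indexes a few invented documents, scores them against a query with a simple TF-IDF measure, and returns only a ranked subset:

```python
import math
from collections import Counter

# Toy corpus standing in for a crawled index (contents are invented).
documents = {
    "doc1": "price comparison for cameras and camera lenses",
    "doc2": "camera reviews and buying guides",
    "doc3": "history of photography and early film",
}

# Indexing: map each document to its term counts.
index = {doc_id: Counter(text.split()) for doc_id, text in documents.items()}

def idf(term):
    """Inverse document frequency: rarer terms carry more weight."""
    df = sum(1 for counts in index.values() if term in counts)
    return math.log((1 + len(index)) / (1 + df)) + 1

def search(query, k=2):
    """Searching/filtering and ranking: score every document against the
    query by TF-IDF, then present only the top-k subset."""
    def score(counts):
        return sum(counts[t] * idf(t) for t in query.split())
    ranked = sorted(index, key=lambda d: score(index[d]), reverse=True)
    return [(d, round(score(index[d]), 3)) for d in ranked[:k]]

print(search("camera price"))  # doc1 and doc2 outrank doc3
```

Every choice here (tokenization, term weighting, the cutoff k) is an editorial decision baked into the ranker, which illustrates why "relevance" is a designed bias rather than a neutral given.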
Search neutrality became a concern after search engines, most notably Google, were accused of search bias by other companies. [5] Competitors claim that search engines systematically favor some sites (and some kinds of sites) over others in their lists of results, disrupting the objective results users believe they are getting. [6]
The call for search neutrality goes beyond traditional search engines. Sites like Amazon.com and Facebook have also been accused of skewing results. [7] Amazon's search results are influenced by companies that pay to rank higher, while Facebook has filtered its news feed to conduct social experiments. [7]
In order to find information on the Web, most users make use of search engines, which crawl the web, index it, and show a list of results ordered by relevance. The use of search engines to access information through the web has become a key factor for online businesses, which depend on the flow of users visiting their pages. [8] One of these companies is Foundem, which provides a "vertical search" service for comparing products available on online markets in the U.K. Many people see such "vertical search" sites as spam. [9] Beginning in 2006 and for the three and a half years following, Foundem's traffic and business dropped significantly due to what it asserts was a penalty deliberately applied by Google. [10] It is unclear, however, whether the penalty Foundem alleges was in fact self-inflicted by its use of iframe HTML tags to embed content from other websites. At the time Foundem claims the penalties were imposed, it was unclear whether web crawlers would crawl beyond the main page of a website that used iframe tags without extra modifications. The former SEO director of OMD UK, Jaamit Durrani, among others, offered this alternative explanation, stating that “Two of the major issues that Foundem had in summer was content in iFrames and content requiring javascript to load – both of which I looked at in August, and they were definitely in place. Both are huge barriers to search visibility in my book. They have been fixed somewhere between then and the lifting of the supposed ‘penalty’. I don't think that's a coincidence.” [11]
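The iframe issue can be illustrated with a hedged sketch. A crawler that only extracts a fetched page's text sees the `<iframe>` element but not the content it embeds; the embedded page gets indexed only if the crawler explicitly follows the frame's `src` URL. The parsing approach below is an assumption for illustration, not a description of Google's crawler:

```python
from html.parser import HTMLParser

class IframeAwareParser(HTMLParser):
    """Collects visible text plus the src of any iframes (content a
    naive text-only crawler would never see)."""
    def __init__(self):
        super().__init__()
        self.text_chunks = []
        self.iframe_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            src = dict(attrs).get("src")
            if src:
                # A naive crawler stops here; an iframe-aware one
                # queues this URL for a separate fetch.
                self.iframe_srcs.append(src)

    def handle_data(self, data):
        if data.strip():
            self.text_chunks.append(data.strip())

page = '<p>Compare prices</p><iframe src="https://example.com/results"></iframe>'
parser = IframeAwareParser()
parser.feed(page)
print(parser.text_chunks)  # ['Compare prices']  <- all a naive crawler indexes
print(parser.iframe_srcs)  # ['https://example.com/results'] <- needs a separate fetch
```

Under this assumption, a site whose product listings lived inside iframes could look nearly empty to a crawler, producing a traffic drop that resembles a penalty.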
Foundem's central accusation is that Google deliberately penalizes other vertical search engines because they represent competition. [12] Foundem is backed by a Microsoft proxy group, the Initiative for Competitive Online Marketplace (ICOMP). [13]
The following table details Foundem's chronology of events as found on their website: [14]
Date | Event |
---|---|
June 2006 | Foundem's Google search penalty begins. Foundem starts an arduous campaign to have the penalty lifted. |
August 2006 | Foundem's AdWord penalty begins. Foundem starts an arduous campaign to have the penalty lifted. |
August 2007 | Teleconference with Google AdWords Quality Team representative. |
September 2007 | Foundem is "whitelisted" for AdWords (i.e. Google manually grants Foundem immunity from its AdWords penalty). |
January 2009 | Foundem starts "public" campaign to raise awareness of this new breed of penalty and manual whitelisting. |
April 2009 | First meeting with ICOMP. |
October 2009 | Teleconference with Google Search Quality Team representative, beginning a detailed dialogue between Foundem and Google. |
December 2009 | Foundem is "whitelisted" for Google natural search (i.e. Google manually grants Foundem immunity from its search penalty). |
Google's large market share (85%) has made it a target for search neutrality litigation via antitrust laws. [15] In February 2010, Google released an article on the Google Public Policy blog expressing its concern for fair competition after other companies (eJustice.fr, and Microsoft's Ciao! from Bing) joined Foundem's cause, also claiming to have been unfairly penalized by Google. [12]
After two years of looking into claims that Google “manipulated its search algorithms to harm vertical websites and unfairly promote its own competing vertical properties,” the Federal Trade Commission (FTC) voted unanimously to end the antitrust portion of its investigation without filing a formal complaint against Google. [16] The FTC concluded that Google's “practice of favoring its own content in the presentation of search results” did not violate U.S. antitrust laws. [5] The FTC further determined that even though competitors might be negatively impacted by Google's changing algorithms, Google did not change its algorithms to hurt competitors, but rather as product improvements to benefit consumers. [5]
There are a number of arguments for and against search neutrality.
According to the Net Neutrality Institute, as of 2018, Google's "Universal Search" system [21] uses by far the least neutral search engine practices; following the implementation of Universal Search, websites such as MapQuest experienced a massive decline in web traffic. This decline has been attributed to Google linking to its own services rather than to the services offered by external websites. [22] [23] Despite these claims, Microsoft's Bing displays Microsoft content in first place more than twice as often as Google shows Google content in first place, indicating that insofar as there is any "bias", Google is less biased than its principal competitor. [24]
Google Search is a search engine operated by Google. It allows users to search for information on the Web by entering keywords or phrases. Google Search uses algorithms to analyze and rank websites based on their relevance to the search query. It is the most popular search engine worldwide.
In computing, a search engine is an information retrieval software system designed to help find information stored on one or more computer systems. Search engines discover, crawl, transform, and store information for retrieval and presentation in response to user queries. The search results are usually presented in a list and are commonly called hits. The most widely used type of search engine is a web search engine, which searches for information on the World Wide Web.
Spamdexing is the deliberate manipulation of search engine indexes. It involves a number of methods, such as link building and repeating related and/or unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system.
Search engine optimization (SEO) is the process of improving the quality and quantity of traffic to a website or a web page from search engines. SEO targets unpaid search traffic rather than direct traffic, referral traffic, social media traffic, or paid traffic.
On the World Wide Web, a link farm is any group of websites that all hyperlink to other sites in the group for the purpose of increasing SEO rankings. In graph theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine. Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites, and are not considered a form of spamdexing.
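As a hedged illustration of the graph-theoretic point, the sketch below represents a small invented link graph as adjacency sets and checks whether a group of sites all link to every other member, i.e. forms a clique; the site names are assumptions for the example:

```python
# Hypothetical link graph: site -> set of sites it links to.
links = {
    "a.example": {"b.example", "c.example"},
    "b.example": {"a.example", "c.example"},
    "c.example": {"a.example", "b.example"},
    "news.example": {"a.example"},
}

def is_clique(sites):
    """True if every site in the group links to every other member."""
    return all(
        other in links.get(site, set())
        for site in sites
        for other in sites
        if other != site
    )

print(is_clique({"a.example", "b.example", "c.example"}))  # True: link-farm pattern
print(is_clique({"a.example", "news.example"}))  # False: a.example never links back
```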
Internet research is the practice of using Internet information, especially free information on the World Wide Web, or Internet-based resources in research.
Spam in blogs is a form of spamdexing which exploits internet sites that allow content to be publicly posted in order to artificially inflate the spammer's website ranking by linking back to the spammer's web pages. Backlinks help search algorithms determine the popularity of a web page, which plays a major role in how search engines like Google and Microsoft Bing rank a web page for a given query. The spammer's website can thus list ahead of other sites for certain searches, increasing the number of visitors to the website.
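To make the mechanism concrete, here is a minimal, hedged sketch of a PageRank-style score on an invented four-page graph. It is an idealized model, not Google's or Bing's actual ranking; the damping factor and graph are assumptions for illustration:

```python
# Hypothetical link graph: page -> pages it links to (names invented).
graph = {
    "blog": ["spam"],    # spam comment planted on a blog post
    "forum": ["spam"],   # another planted on a forum thread
    "spam": [],          # the spammer's own page
    "honest": [],        # a comparable page with no backlinks
}

def pagerank(graph, damping=0.85, iterations=50):
    """Power iteration: each page's score flows to the pages it links to,
    so inbound links (backlinks) inflate the target's score."""
    n = len(graph)
    rank = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in graph}
        for page, outlinks in graph.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")  # "spam" wins purely on backlink count
```

The "spam" page ends up with roughly three times the score of the "honest" page solely because two planted comments link to it, which is exactly the signal blog spam exploits.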
A metasearch engine is an online information retrieval tool that uses the data of other web search engines to produce its own results. Metasearch engines take input from a user and immediately query other search engines for results; the gathered data is then ranked and presented to the user.
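A hedged sketch of the gather-and-rank step, using reciprocal rank fusion (one common, simple merging method, not necessarily what any given metasearch engine uses); the ranked lists below are stubs standing in for live engine responses:

```python
# Stubbed ranked lists from two engines (URLs invented; a real
# metasearch engine would issue live queries instead).
engine_a = ["https://x.example", "https://y.example", "https://z.example"]
engine_b = ["https://x.example", "https://y.example", "https://w.example"]

def fuse(*result_lists, k=60):
    """Reciprocal rank fusion: each list votes 1/(k + position)."""
    scores = {}
    for results in result_lists:
        for position, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + position)
    return sorted(scores, key=scores.get, reverse=True)

print(fuse(engine_a, engine_b))
# x and y appear in both lists, so they outrank z and w.
```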
From the point of view of a given web resource (referent), a backlink is a regular hyperlink on another web resource that points to the referent. A web resource may be a website, web page, or web directory.
The sandbox effect is a theory about the way Google ranks web pages in its index. It is the subject of much debate—its existence has been written about since 2004, but not confirmed, with several statements to the contrary.
A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news.
A search engine results page (SERP) is a webpage that is displayed by a search engine in response to a query by a user. The main component of a SERP is the listing of results that are returned by the search engine in response to a keyword query.
Google Personalized Search is a personalized search feature of Google Search, introduced in 2004. All searches on Google Search are associated with a browser cookie record. When a user performs a search, the search results are not only based on the relevance of each web page to the search term, but also on which websites the user visited through previous search results. This provides a more personalized experience that can increase the relevance of the search results for the particular user. Such filtering may also have side effects, such as the creation of a filter bubble.
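The re-ranking behavior described above can be sketched as follows; the boost rule, scores, and visit history are assumptions for illustration, not Google's actual formula:

```python
# Base results: (url, relevance score) from the non-personalized ranker.
base_results = [
    ("https://news.example/bp-spill", 0.90),
    ("https://finance.example/bp-stock", 0.88),
    ("https://bp.example/about", 0.80),
]

# Hypothetical per-user visit history from a browser cookie record.
visit_counts = {"https://finance.example/bp-stock": 7}

def personalize(results, history, boost=0.05):
    """Re-rank: add a small boost per previous visit to the same URL."""
    rescored = [
        (url, score + boost * history.get(url, 0))
        for url, score in results
    ]
    return sorted(rescored, key=lambda item: item[1], reverse=True)

for url, score in personalize(base_results, visit_counts):
    print(f"{score:.2f}  {url}")  # the previously visited page now ranks first
```

Two users with different histories would see different orderings for the same query, which is the mechanism behind the filter bubble mentioned above.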
Social search is the behavior of retrieving and searching on a social search engine that mainly searches user-generated content, such as news, videos, and images related to search queries, on social media like Facebook, LinkedIn, Twitter, Instagram and Flickr. It is an enhanced version of web search that supplements traditional ranking algorithms with social signals. The idea behind social search is that instead of ranking search results purely based on the semantic relevance between a query and the results, a social search system also takes into account social relationships between the results and the searcher. These social relationships can take various forms. For example, in LinkedIn's people search engine, they include direct connections between the searcher and each result, and whether the two are in the same industry, work for the same company, belong to the same social groups, or went to the same schools.
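As a hedged illustration, the sketch below blends a textual relevance score with one simple social signal (whether the result's author is a direct connection of the searcher); the graph, names, and weight are invented:

```python
# Toy social graph: person -> direct connections (names invented).
connections = {
    "searcher": {"alice", "bob"},
}

# Candidate results: (author, textual relevance to the query).
candidates = [("carol", 0.9), ("alice", 0.7), ("bob", 0.6)]

def social_rank(searcher, results, social_weight=0.3):
    """Blend semantic relevance with a social-connection bonus."""
    friends = connections.get(searcher, set())
    scored = [
        (author, relevance + (social_weight if author in friends else 0.0))
        for author, relevance in results
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

print(social_rank("searcher", candidates))
# alice (0.7 + 0.3 = 1.0) now outranks carol (0.9).
```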
Image meta search is a type of search engine specialised in finding pictures, images, animations, etc. Like text search, image search is an information retrieval system designed to help find information on the Internet; it allows the user to look for images using keywords or search phrases and to receive a set of thumbnail images, sorted by relevance.
In the field of search engine optimization (SEO), link building describes actions aimed at increasing the number and quality of inbound links to a webpage with the goal of increasing the search engine rankings of that page or website. Briefly, link building is the process of establishing relevant hyperlinks to a website from external sites. Link building can increase the number of high-quality links pointing to a website, in turn increasing the likelihood of the website ranking highly in search engine results. Link building is also a proven marketing tactic for increasing brand awareness.
Personalized search is a web search tailored specifically to an individual's interests by incorporating information about the individual beyond the specific query provided. There are two general approaches to personalizing search results, involving modifying the user's query and re-ranking search results.
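Of the two approaches, query modification can be sketched as simple query expansion (re-ranking is sketched under Google Personalized Search above); the profile and matching rule here are illustrative assumptions:

```python
# Hypothetical interest profile inferred from a user's past behavior.
profile = {"photography": ["camera", "lens"], "finance": ["stock"]}

def expand_query(query, interests, max_extra=2):
    """Query modification: append profile terms when a topic matches."""
    extra = []
    for topic, terms in interests.items():
        if topic in query or any(term in query for term in terms):
            extra.extend(terms)
    new_terms = [t for t in extra if t not in query.split()][:max_extra]
    return " ".join(query.split() + new_terms)

print(expand_query("camera review", profile))  # -> "camera review lens"
```

The expanded query then goes to the ordinary ranker, so two users with different profiles effectively issue different queries.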
A content farm or content mill is a company that employs freelance creators or uses automated tools to generate a large amount of web content which is specifically designed to satisfy algorithms for maximal retrieval by search engines, known as SEO. Their main goal is to generate advertising revenue through attracting page views, as first exposed in the context of social spam.
Google Penguin is the codename for a Google algorithm update first announced on April 24, 2012. The update was aimed at decreasing the search engine rankings of websites that violate Google's Webmaster Guidelines by using what are now declared grey-hat SEM techniques, which artificially increase a webpage's ranking by manipulating the number of links pointing to the page. Such tactics are commonly described as link schemes. According to Google's John Mueller, as of 2013 Google announces all updates to the Penguin filter to the public.
Local search engine optimization is similar to (national) SEO in that it is also a process affecting the visibility of a website or a web page in a web search engine's unpaid results, often referred to as "natural", "organic", or "earned" results. In general, the higher a site ranks on the search results page and the more frequently it appears in the results list, the more visitors it will receive from the search engine's users; these visitors can then be converted into customers. Local SEO, however, differs in that it focuses on optimizing a business's online presence so that its web pages will be displayed by search engines when users enter local searches for its products or services. Ranking for local search follows a similar process to general SEO but includes some elements specific to ranking a business for local search.