Search neutrality

Search neutrality is the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial, and based solely on relevance. [1] Under this principle, when a user enters a query, the engine should return the most relevant results found in the provider's domain (the sites the engine has knowledge of), without excluding results, reordering them for any reason other than relevance, or otherwise biasing what is shown.

Search neutrality is related to network neutrality in that they both aim to keep any one organization from limiting or altering a user's access to services on the Internet. Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content.

Background

The term "search neutrality" in context of the internet appears as early as March 2009 in an academic paper by the Polish-American mathematician Andrew Odlyzko titled, "Network Neutrality, Search Neutrality, and the Never-ending Conflict between Efficiency and Fairness in Markets". [2] In this paper, Odlykzo predicts that if net neutrality were to be accepted as a legal or regulatory principle, then the questions surrounding search neutrality would be the next controversies. Indeed, in December 2009 the New York Times published an opinion letter by Foundem co-founder and lead complainant in an anti-trust complaint against Google, Adam Raff, which likely brought the term to the broader public. According to Raff in his opinion letter, search neutrality ought to be "the principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance". [1] On October 11, 2009, Adam and his wife Shivaun launched SearchNeutrality.org, an initiative dedicated to promoting investigations against Google's search engine practices. [3] There, the Raffs note that they chose to frame their issue with Google as "search neutrality" in order to benefit from the focus and interest on net neutrality. [3]

In contrast to net neutrality, questions such as "what is search neutrality?" or "what are appropriate legislative or regulatory principles to protect search neutrality?" have found less consensus. The idea that neutrality means equal treatment, regardless of the content, comes from debates on net neutrality. [4] Neutrality in search is complicated by the fact that search engines, by design and in implementation, are not intended to be neutral or impartial. Rather, search engines and other information retrieval applications are designed to collect and store information (indexing), receive a query from a user, search for and filter relevant information based on that query (searching/filtering), and then present the user with only a subset of those results, ranked from most relevant to least relevant (ranking). "Relevance" is itself a form of bias: it is used to favor some results and to order the favored ones. Because relevance is defined so that users are satisfied with the results, it is subject to users' preferences; and because relevance is so subjective, putting search neutrality into practice has proven contentious.
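The index-search-rank pipeline described above can be sketched with a toy ranker. The three-document corpus, whitespace tokenizer, and TF-IDF scoring below are illustrative assumptions, not how any production engine works; the point is that even this minimal "relevance" function is a deliberate bias that favors some documents and orders the favored ones.

```python
import math
from collections import Counter

# Hypothetical three-document corpus standing in for an engine's index.
docs = {
    "d1": "cheap flights to london cheap fares",
    "d2": "london weather forecast",
    "d3": "flights and hotels comparison",
}

def rank(query, docs):
    """Return document ids sorted by a TF-IDF 'relevance' score.

    The choice of scoring formula is exactly the kind of editorial
    judgment the article describes: it decides which documents are
    favored and in what order they appear.
    """
    n = len(docs)
    tokens = {d: text.split() for d, text in docs.items()}
    df = Counter()                      # document frequency of each term
    for words in tokens.values():
        df.update(set(words))

    def score(words):
        tf = Counter(words)
        return sum(
            (tf[t] / len(words)) * math.log(n / df[t])
            for t in query.split()
            if t in tf
        )

    return sorted(docs, key=lambda d: score(tokens[d]), reverse=True)

print(rank("cheap flights", docs))  # d1 matches both terms, d2 neither
```

Changing any constant in `score` (or swapping TF-IDF for another formula) reorders the results, which is why "rank by relevance" is not a neutral instruction.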

Search neutrality became a concern after search engines, most notably Google, were accused of search bias by other companies. [5] Competitors claim that search engines systematically favor some sites (and some kinds of sites) over others in their result lists, disrupting the objective results users believe they are getting. [6]

The call for search neutrality goes beyond traditional search engines. Sites like Amazon.com and Facebook are also accused of skewing results. [7] Amazon's search results are influenced by companies that pay to rank higher, while Facebook filters its newsfeed to conduct social experiments. [7]

"Vertical search" spam penalties

To find information on the Web, most users rely on search engines, which crawl the web, index it, and show a list of results ordered by relevance. The use of search engines to access information through the web has become a key factor for online businesses, which depend on the flow of users visiting their pages. [8] One of these companies is Foundem. Foundem provides a "vertical search" service to compare products available on online markets in the U.K. Many people see such "vertical search" sites as spam. [9] Beginning in 2006 and for three and a half years following, Foundem's traffic and business dropped significantly due to what it asserts was a penalty deliberately applied by Google. [10] It is unclear, however, whether the claimed penalty was self-imposed via Foundem's use of iframe HTML tags to embed content from other websites. At the time Foundem claims the penalties were imposed, it was unclear whether web crawlers would crawl beyond the main page of a website that used iframe tags without extra modifications. The former SEO director of OMD UK, Jaamit Durrani, among others, offered this alternative explanation, stating that "Two of the major issues that Foundem had in summer was content in iFrames and content requiring javascript to load – both of which I looked at in August, and they were definitely in place. Both are huge barriers to search visibility in my book. They have been fixed somewhere between then and the lifting of the supposed 'penalty'. I don't think that's a coincidence." [11]
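The iframe issue can be illustrated with a minimal parser. The page below is a hypothetical comparison site whose listings live entirely in a framed document; an indexer that reads only the page's own HTML (a simplifying assumption, sketched here with Python's standard html.parser) sees the heading but none of the framed content unless it makes a second fetch of the iframe's URL.

```python
from html.parser import HTMLParser

class PageTextExtractor(HTMLParser):
    """Collect a page's own visible text plus any iframe sources.
    A crawler that stops here never indexes the framed content."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.iframe_srcs = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "iframe":
            src = dict(attrs).get("src")
            if src:
                self.iframe_srcs.append(src)

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = max(0, self._skip - 1)

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text.append(data.strip())

# Hypothetical comparison page: the listings live in an iframe.
page = """
<html><body>
  <h1>Price comparison</h1>
  <iframe src="https://example.com/listings"></iframe>
</body></html>
"""

p = PageTextExtractor()
p.feed(page)
print(p.text)         # only the heading is indexable in-page
print(p.iframe_srcs)  # the real content would need a second fetch
```

Under this sketch, a site that serves all of its substance through iframes can look nearly empty to a crawler, which is consistent with the alternative explanation for Foundem's lost visibility.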

Foundem's central accusation is that Google deliberately applies penalties to other vertical search engines because they represent competition. [12] Foundem is backed by a Microsoft proxy group, the Initiative for a Competitive Online Marketplace. [13]

Foundem's case chronology

The following table details Foundem's chronology of events as found on their website: [14]

June 2006: Foundem's Google search penalty begins. Foundem starts an arduous campaign to have the penalty lifted.
August 2006: Foundem's AdWords penalty begins. Foundem starts an arduous campaign to have the penalty lifted.
August 2007: Teleconference with a Google AdWords Quality Team representative.
September 2007: Foundem is "whitelisted" for AdWords (i.e., Google manually grants Foundem immunity from its AdWords penalty).
January 2009: Foundem starts a "public" campaign to raise awareness of this new breed of penalty and manual whitelisting.
April 2009: First meeting with ICOMP.
October 2009: Teleconference with a Google Search Quality Team representative, beginning a detailed dialogue between Foundem and Google.
December 2009: Foundem is "whitelisted" for Google natural search (i.e., Google manually grants Foundem immunity from its search penalty).

Other cases

Google's large market share (85%) has made it a target for search neutrality litigation via antitrust laws. [15] In February 2010, Google released an article on the Google Public Policy blog expressing its concern for fair competition, after other companies (eJustice.fr, and Microsoft's Ciao! from Bing) had joined Foundem's cause, also claiming to have been unfairly penalized by Google. [12]

The FTC’s Investigation into Allegations of Search Bias

After two years of looking into claims that Google "manipulated its search algorithms to harm vertical websites and unfairly promote its own competing vertical properties," the Federal Trade Commission (FTC) voted unanimously to end the antitrust portion of its investigation without filing a formal complaint against Google. [16] The FTC concluded that Google's "practice of favoring its own content in the presentation of search results" did not violate U.S. antitrust laws. [5] The FTC further determined that even though competitors might be negatively affected by Google's changing algorithms, Google did not change its algorithms to hurt competitors but rather to improve its product for the benefit of consumers. [5]

Arguments

There are a number of arguments for and against search neutrality.

Pros

Cons

According to the Net Neutrality Institute, as of 2018, Google's "Universal Search" system [21] uses by far the least neutral search practices; following the implementation of Universal Search, websites such as MapQuest experienced a massive decline in web traffic. This decline has been attributed to Google linking to its own services rather than to those offered by external websites. [22] [23] Despite these claims, Microsoft's Bing displays Microsoft content in first place more than twice as often as Google shows Google content in first place, suggesting that, insofar as there is any 'bias', Google is less biased than its principal competitor. [24]


References

  1. 1 2 "Search, but You May Not Find". The New York Times. 2009. Retrieved March 3, 2011.
  2. Odlyzko, Andrew (March 2009). "Network Neutrality, Search Neutrality, and the Never-ending Conflict between Efficiency and Fairness in Markets" (PDF). Review of Network Economics: 40–60. Retrieved 4 July 2017.[ permanent dead link ]
  3. 1 2 "About SearchNeutrality.org". searchneutrality.org. Archived from the original on 4 August 2016. Retrieved 4 July 2017.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
  4. Grimmelmann, James (17 January 2011). "Some Skepticism about Search Neutrality". The Next Digital Decade: Essays on the Future of the Internet: 435–461. SSRN   1742444.
  5. 1 2 3 Lao, Marina (July 2013). ""Neutral" Search As A Basis for Antitrust Action?" (PDF). Harvard Journal of Law & Technology Occasional Paper Series: 1–12. Archived from the original (PDF) on 5 September 2015. Retrieved 19 November 2014.
  6. Herman, Tavani (2014). Zalta, Edward N. (ed.). "Search Engines and Ethics". The Stanford Encyclopedia of Philosophy. Retrieved 20 November 2014.
  7. 1 2 Shavin, Naomi. "Are Google and Amazon the next threat to net neutrality?". Forbes.com. Retrieved 19 November 2014.
  8. "Search Engine Marketing". Marketing Today. 2005. Archived from the original on February 25, 2011. Retrieved March 2, 2011.
  9. Cade Metz (1 September 2011). "Antitrust nemesis accuses Google of 'WMD program'". The Register. The Register. Retrieved 2 August 2012.
  10. "Foundem's Google Story". www.searchneutrality.org. 2009. Archived from the original on 27 January 2010.
  11. Chris Lake (5 January 2010). "Foundem vs Google redux: it was a penalty! And search neutrality is at stake, dammit!". Archived from the original on 31 July 2017. Retrieved July 3, 2017.
  12. 1 2 "Committed to competing fairly". 2010. Retrieved March 2, 2011.
  13. Nancy Gohring (4 September 2010). "Texas Conducting Antitrust Review of Google". PC World. IDG Consumer and SMB. Retrieved 2 August 2012.
  14. "The Chronology". www.searchneutrality.org. 2010. Retrieved March 3, 2011.
  15. "Background to EU Formal Investigation". foundem. November 30, 2010. Retrieved February 13, 2011.
  16. "Google Agrees to Change Its Business Practices to Resolve FTC Competition Concerns In the Markets for Devices Like Smart Phones, Games and Tablets, and in Online Search". FTC.gov. Federal Trade Commission. July 3, 2013. Retrieved 20 November 2014.
  17. 1 2 "Search Neutrality as Disclosure and Auditing". Concurring Opinions. February 19, 2011. Retrieved March 3, 2011.
  18. "Search, but You May Not Find". Bucknell.edu. December 27, 2009. Retrieved February 13, 2010.
  19. 1 2 3 4 5 6 Grimmelmann, James (2010). "Some Skepticism About Search Neutrality". The Next Digital Decade: Essays on the Future of the Internet. TechFreedom.
  20. Parramore, Lynn (October 10, 2010). "The Filter Bubble". The Atlantic. Retrieved April 20, 2011. Since Dec. 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google "BP," one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill....
  21. Danny Sullivan (16 May 2007). "Google 2.0: Google Universal Search". Search Engine Land. Third Door Media, Inc. Retrieved 2 August 2012.
  22. "Google Begins Move to Universal Search". May 17, 2007. Retrieved February 13, 2011.
  23. "Google Maps Gaining On Market Leader Mapquest". Search Engine Land. January 10, 2008. Retrieved February 13, 2011.
  24. Joshua D. Wright (3 November 2011). "Defining and Measuring Search Bias: Some Preliminary Evidence" (PDF). International Center for Law and Economics. Retrieved 2 August 2012.