A metasearch engine (or search aggregator ) is an online information retrieval tool that uses the data of other web search engines to produce its own results. [1] [2] Metasearch engines take input from a user and immediately query several search engines [3] for results. The returned data is gathered, ranked, and presented to the user.
Problems such as spamming reduce the accuracy and precision of results. [4] The process of fusion aims to improve the quality of the combined results. [5]
Examples of metasearch engines include Skyscanner and Kayak.com, which aggregate search results from online travel agencies and provider websites, and Searx, a free and open-source metasearch engine that aggregates results from other internet search engines.
The first person to incorporate the idea of metasearching was Daniel Dreilinger of Colorado State University. He developed SearchSavvy, which let users search up to 20 different search engines and directories at once. Although fast, the search engine was restricted to simple searches and thus was not very reliable. University of Washington student Eric Selberg released a more "updated" version called MetaCrawler. This search engine improved on SearchSavvy's accuracy by adding its own search syntax behind the scenes and matching that syntax to those of the search engines it was probing. MetaCrawler reduced the number of search engines queried to six, and although it produced more accurate results, it was still not considered as accurate as running a query on an individual engine. [6]
On May 20, 1996, HotBot, then owned by Wired, launched as a search engine whose results came from the Inktomi and Direct Hit databases. It was known for its fast results and for its ability to search within search results. After being bought by Lycos in 1998, development of the search engine stagnated and its market share fell drastically. After going through a few alterations, HotBot was redesigned into a simplified search interface, with its features being incorporated into Lycos' website redesign. [7]
A metasearch engine called Anvish was developed by Bo Shu and Subhash Kak in 1999; the search results were sorted using instantaneously trained neural networks. [8] This was later incorporated into another metasearch engine called Solosearch. [9]
In August 2000, India got its first metasearch engine when HumHaiIndia.com was launched. [10] It was developed by the then 16-year-old Sumeet Lamba. [11] The website was later rebranded as Tazaa.com. [12]
Ixquick is a search engine known for its privacy policy statement. Developed and launched in 1998 by David Bodnick, it is owned by Surfboard Holding BV. In June 2006, Ixquick began to delete private details of its users, following the same process as Scroogle. Ixquick's privacy policy includes no recording of users' IP addresses, no identifying cookies, no collection of personal data, and no sharing of personal data with third parties. [13] It also uses a unique star-based ranking system: the more stars a result receives, the more search engines agreed on it.
In April 2005, Dogpile, then owned and operated by InfoSpace, Inc., collaborated with researchers from the University of Pittsburgh and Pennsylvania State University to measure the overlap and ranking differences of leading Web search engines in order to gauge the benefits of using a metasearch engine to search the web. Results found that from 10,316 random user-defined queries from Google, Yahoo!, and Ask Jeeves, only 3.2% of first page search results were the same across those search engines for a given query. Another study later that year using 12,570 random user-defined queries from Google, Yahoo!, MSN Search, and Ask Jeeves found that only 1.1% of first page search results were the same across those search engines for a given query. [14]
By sending the same query to several other search engines, a metasearch engine extends the coverage of the topic and allows more information to be found. Metasearch engines use the indexes built by other search engines, aggregating and often post-processing results in unique ways. A metasearch engine has an advantage over a single search engine because more results can be retrieved with the same amount of effort. [2] It also spares users the work of typing searches into different engines individually when looking for resources. [2]
Metasearching is also a useful approach if the purpose of the user's search is to get an overview of the topic or to get quick answers. Instead of having to go through multiple search engines like Yahoo! or Google and comparing results, metasearch engines are able to quickly compile and combine results. They can do this either by listing results from each engine queried with no additional post-processing (Dogpile) or by analyzing the results and ranking them by their own rules (Ixquick, MetaCrawler, and Vivisimo).
A metasearch engine can also hide the searcher's IP address from the search engines queried, thus providing privacy for the search.
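The fan-out step described above can be illustrated with a short sketch: a single query is sent to several engines in parallel and the raw result lists are collected. The engine endpoints and the JSON response format used here are hypothetical placeholders, not real search-engine APIs.

```python
# Minimal sketch of a metasearch "fan-out": the same query is sent to several
# engines concurrently and the raw result lists are collected per engine.
# The endpoint URLs and the response shape {"results": [{"url": ..., "title": ...}]}
# are hypothetical placeholders, not real search-engine APIs.
from concurrent.futures import ThreadPoolExecutor

import requests

ENGINES = {
    "engine_a": "https://example-engine-a.invalid/search",  # hypothetical endpoint
    "engine_b": "https://example-engine-b.invalid/search",  # hypothetical endpoint
    "engine_c": "https://example-engine-c.invalid/search",  # hypothetical endpoint
}


def query_engine(url: str, query: str) -> list[dict]:
    """Send the user's query to one engine and return its result list."""
    response = requests.get(url, params={"q": query}, timeout=5)
    response.raise_for_status()
    return response.json().get("results", [])


def metasearch(query: str) -> dict[str, list[dict]]:
    """Fan the query out to every configured engine in parallel."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {
            name: pool.submit(query_engine, url, query)
            for name, url in ENGINES.items()
        }
    # Engines that fail or time out are skipped rather than blocking the search.
    results: dict[str, list[dict]] = {}
    for name, future in futures.items():
        try:
            results[name] = future.result()
        except requests.RequestException:
            results[name] = []
    return results
```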
Metasearch engines cannot parse query forms or fully translate query syntax. The number of hyperlinks generated by metasearch engines is limited, so they do not provide the user with the complete results of a query. [15]
The majority of metasearch engines do not provide over ten linked files from a single search engine, and generally do not interact with larger search engines for results. Pay-per-click links are prioritised and are normally displayed first. [16]
Metasearching also gives the illusion that there is more coverage of the topic queried, particularly if the user is searching for popular or commonplace information; it is common to end up with multiple identical results from the queried engines. It is also harder for users to send queries using advanced search syntax, so results may not be as precise as when a user uses the advanced search interface of a specific engine. As a result, many metasearch engines use simple searching. [17]
A metasearch engine accepts a single search request from the user. This search request is then passed on to other search engines' databases. A metasearch engine does not create its own database of web pages but generates a federated database system, integrating data from multiple sources. [18] [19] [20]
Since every search engine is unique and uses different algorithms to generate ranked data, the combined results will contain duplicates. To remove these duplicates, a metasearch engine processes the data and applies its own ranking algorithm, producing a revised list as output for the user.[ citation needed ] When a metasearch engine contacts other search engines, those engines can respond in three ways: they may cooperate fully and provide complete access to their interface, including private access to the index database; they may behave non-cooperatively, neither denying nor providing access to their interfaces; or they may be entirely hostile and refuse the metasearch engine any access to their database.
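The duplicate-removal step can be sketched as follows, assuming the hypothetical per-engine result format from the previous sketch; URLs are lightly normalised so that the same page returned by several engines is kept only once.

```python
# Minimal sketch of duplicate removal across per-engine result lists, assuming
# each engine returned results shaped like {"url": ..., "title": ...} (the same
# hypothetical format as the earlier fan-out sketch).
from urllib.parse import urlsplit


def canonical(url: str) -> str:
    """Normalise a URL so that trivially different links count as duplicates."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    return f"{host}{parts.path.rstrip('/')}"


def merge_results(per_engine: dict[str, list[dict]]) -> list[dict]:
    """Combine per-engine result lists into one list with duplicates removed."""
    merged: dict[str, dict] = {}
    for engine, results in per_engine.items():
        for rank, item in enumerate(results, start=1):
            key = canonical(item["url"])
            entry = merged.setdefault(key, {**item, "engines": {}})
            # Remember where each engine ranked this page; a later
            # ranking or fusion step can combine these positions.
            entry["engines"][engine] = rank
    return list(merged.values())
```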
Web pages that are highly ranked on many search engines are likely to be more relevant in providing useful information. [21] However, all search engines have different ranking scores for each website, and most of the time these scores are not the same. This is because search engines prioritise different criteria and methods for scoring, so a website might appear highly ranked on one search engine and lowly ranked on another. This is a problem because metasearch engines rely heavily on the consistency of this data to generate reliable rankings. [21]
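One common way to make such incomparable scores usable together is to rescale each engine's scores onto a common range before combining them. The sketch below shows simple min-max normalisation over a hypothetical engine-to-score mapping; it illustrates the general idea rather than any particular engine's method.

```python
# Sketch of min-max normalisation: each engine's raw scores are rescaled to
# the range [0, 1] so they can be compared and combined across engines.
# The input format (engine -> {url: raw_score}) is hypothetical.
def normalise_scores(raw: dict[str, float]) -> dict[str, float]:
    """Rescale one engine's raw scores to the range [0, 1]."""
    if not raw:
        return {}
    low, high = min(raw.values()), max(raw.values())
    if high == low:
        return {url: 1.0 for url in raw}
    return {url: (score - low) / (high - low) for url, score in raw.items()}


# Example usage with made-up scores on very different scales.
per_engine_scores = {
    "engine_a": {"example.org/a": 12.0, "example.org/b": 4.0},
    "engine_b": {"example.org/a": 0.9, "example.org/c": 0.3},
}
normalised = {
    engine: normalise_scores(scores) for engine, scores in per_engine_scores.items()
}
```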
A metasearch engine uses the process of fusion to filter data for more efficient results. The two main fusion methods used are Collection Fusion and Data Fusion. Collection Fusion, also known as distributed retrieval, deals with search engines that index unrelated data, while Data Fusion deals with merging the results of search engines that index common data sets.
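As a concrete illustration of a data-fusion rule, the sketch below merges the ranked lists of several engines with a Borda count: each engine awards points according to a result's position, and the fused ranking orders results by total points. The input format is hypothetical; score-based rules such as CombSUM or CombMNZ, which sum normalised scores, are also widely used.

```python
# Sketch of a simple data-fusion rule (Borda count): each engine awards a
# result points based on its position in that engine's ranked list, and the
# final ranking orders results by total points. Input: one ranked list of
# URLs per engine (a hypothetical format used only for illustration).
from collections import defaultdict


def borda_fuse(ranked_lists: dict[str, list[str]]) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for engine, urls in ranked_lists.items():
        n = len(urls)
        for position, url in enumerate(urls):
            # The top result gets n points, the second n - 1, and so on.
            scores[url] += n - position
    return sorted(scores, key=scores.get, reverse=True)


# A page ranked highly by several engines accumulates the most points.
fused = borda_fuse({
    "engine_a": ["example.org/a", "example.org/b", "example.org/c"],
    "engine_b": ["example.org/b", "example.org/a"],
})
```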
Spamdexing is the deliberate manipulation of search engine indexes. It uses a number of methods to manipulate the relevance or prominence of indexed resources in a manner unaligned with the intention of the indexing system. Spamdexing can be very frustrating for users and problematic for search engines because the content returned by searches has poor precision.[ citation needed ] This eventually results in the search engine becoming unreliable and not dependable for the user. To tackle spamdexing, search robot algorithms are made more complex and are changed almost daily to eliminate the problem. [24]
It is a major problem for metasearch engines because it tampers with the Web crawler's indexing criteria, which are heavily relied upon to format ranking lists. Spamdexing manipulates the natural ranking system of a search engine, and places websites higher on the ranking list than they would naturally be placed. [25] There are three primary methods used to achieve this:
Content spam comprises techniques that alter the logical view that a search engine has of a page's contents. Techniques include keyword stuffing, hidden or invisible text, meta-tag stuffing, doorway pages, and scraper sites.
Link spam consists of links between pages that are present for reasons other than merit. Techniques include link farms, hidden links, Sybil attacks, spam blogs, page hijacking, and buying expired domains.
Cloaking is an SEO technique in which different material and information are served to the web crawler and to the web browser. [26] It is commonly used as a spamdexing technique because it can trick search engines into either visiting a site that is substantially different from the search engine's description of it or giving a certain site a higher ranking.