The domain authority (also referred to as thought leadership) of a website describes its relevance for a specific subject area or industry; "Domain Authority" is also the name of a branded search engine ranking score developed by Moz. [1] This relevance has a direct impact on a website's ranking by search engines, which try to assess domain authority through automated analytic algorithms. The importance of domain authority for website listings in the search engine results pages (SERPs) has given rise to a whole industry of black-hat SEO providers that attempt to feign an increased level of domain authority. [2] The ranking by major search engines, e.g., Google's PageRank, is agnostic of specific industries or subject areas and assesses a website in the context of the totality of websites on the Internet. [3] The results on the SERP place the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility in search engines, because the highest-ranked sites matching the specific search words occupy the first positions in the SERPs. [4]
Domain authority can be described through four dimensions: the prestige of a website and its authors, the quality of the information it presents, its centrality in the link graph, and the competitive situation around the subject area.
The weight of these factors varies depending on the ranking body. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on a website, the novelty of the content, but also the competitive situation around the discussed subject area or the quality of the outgoing links. [5] Several search engines (e.g., Bing, Google, Yahoo) have developed automated analyses and ranking algorithms for domain authority. Lacking the "human reasoning" that would allow them to judge quality directly, they rely on complementary parameters such as the prestige of the information or the website and its centrality from a graph-theoretical perspective, manifested in the quantity and quality of inbound links. [6] The software-as-a-service company Moz has developed an algorithm and weighted level metric, branded as "Domain Authority", which predicts a website's performance in search engine rankings on a scale from 0 to 100. [7] [8]
Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis of graph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing to the website and by the quality of those nodes. The quality of the nodes is in turn defined through their own prestige. This definition ensures that a prestigious website is not only pointed at by many other websites, but that those pointing websites are prestigious themselves. [9] Similar to the prestige of a website, the prestige of contributing authors is taken into consideration [10] in cases where the authors are named and identified (e.g., with their Twitter or Google Plus profile). In this case, prestige is measured by the prestige of the authors who quote or refer to them and by the quantity of referrals these authors receive. [5] Search engines use additional factors to scrutinize a website's prestige. Google's PageRank, for example, looks at factors like link diversification and link dynamics: when too many links come from the same domain or webmaster, there is a risk of black-hat SEO, and when backlinks grow rapidly, this raises suspicion of spam or black-hat SEO. [11] In addition, Google looks at factors like the public availability of the WHOIS information of the domain owner, the use of global top-level domains, domain age, and volatility of ownership to assess a site's apparent prestige. Lastly, search engines look at the traffic and the volume of organic searches for a site, as the amount of traffic should be congruent with the level of prestige that a website has in a certain domain. [5]
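The recursive definition of prestige can be operationalized as an iterative computation over the link graph, in the style of PageRank. The following is a minimal sketch; the link graph, site names, and damping value are illustrative assumptions, not the formula used by any particular search engine or by Moz.

```python
# Minimal sketch: recursive prestige on a directed link graph (PageRank-style).
# The graph and damping factor below are illustrative assumptions, not real data.

links = {  # each site maps to the sites it links to
    "blog-a": ["hub-site", "news-site"],
    "news-site": ["hub-site"],
    "hub-site": ["blog-a"],
}

def prestige(graph, damping=0.85, iterations=50):
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in graph.items():
            if targets:
                share = damping * score[source] / len(targets)
                for target in targets:
                    new[target] += share  # a link transfers a share of the linker's prestige
        score = new
    return score

print(prestige(links))  # sites pointed to by prestigious sites score higher
```

Because each node's score depends on the scores of the nodes linking to it, repeated iteration is what captures the requirement that the pointing websites be prestigious themselves.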
Information quality describes the value that information provides to the reader. Wang and Strong categorize the assessable dimensions of information into intrinsic (accuracy, objectivity, believability, reputation), contextual (relevancy, value-added/authenticity, timeliness, completeness, quantity), representational (interpretability, format, coherence, compatibility), and accessibility (accessibility and access security) dimensions. [12] Humans can base their quality judgments on experience in assessing content, style, and grammatical correctness. Information systems like search engines need indirect means of drawing conclusions about the quality of information. In 2015, Google's ranking algorithm took approximately 200 ranking factors into account in a learning algorithm to assess information quality. [13]
Prominent actors have extensive and ongoing relationships with other prominent actors. This increases their visibility and makes their content more relevant, interconnected, and useful. [9] Centrality, from a graph-theoretical perspective, describes relationships without distinguishing between receiving and sending information. In this context, it includes the inbound links considered in the definition of prestige, complemented by outgoing links. Another difference between prestige and centrality is that the measure of prestige applies to a complete website or author, whereas centrality can be considered at a more granular level, such as an individual blog post, as sketched below. Search engines evaluate various factors to assess the quality of outgoing links, including link centrality, which describes the quality, quantity, and relevance of outgoing links as well as the prestige of their destinations. They also consider the frequency of new content publication ("freshness of information") to ensure that the website remains an active participant in the community. [5]
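The simplest centrality measure of this kind is degree centrality, where inbound and outbound links are counted without regard to direction and a score can be computed per page. A minimal sketch, using hypothetical page names and link data:

```python
# Minimal sketch: degree centrality per page, ignoring link direction.
# The link data is a hypothetical example, not measurements of real sites.

from collections import defaultdict

links = [  # (source page, target page) pairs
    ("post-1", "reference-site"),
    ("post-1", "tutorial"),
    ("tutorial", "post-1"),
    ("post-2", "tutorial"),
    ("reference-site", "tutorial"),
]

def degree_centrality(edges):
    # collapse directed links into undirected neighbour sets
    neighbours = defaultdict(set)
    for source, target in edges:
        neighbours[source].add(target)
        neighbours[target].add(source)
    n = len(neighbours)
    # normalise by the maximum possible number of neighbours (n - 1)
    return {page: len(adjacent) / (n - 1) for page, adjacent in neighbours.items()}

print(degree_centrality(links))  # "tutorial" has the most link partners, so it scores highest
```

Unlike the prestige computation above, this measure is local: it only counts a page's direct link partners, which is why it can sensibly be applied to a single blog post rather than a whole site.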
The domain authority that a website attains is not the only factor that determines its position in the SERPs of search engines. The second important factor is the competitiveness of a specific sector. Subjects like SEO are very competitive, so a website needs to outperform the prestige of competing websites to attain domain authority. This prestige relative to other websites can be described as "relative domain authority."
In computing, a search engine is an information retrieval software system designed to help find information stored on one or more computer systems. Search engines discover, crawl, transform, and store information for retrieval and presentation in response to user queries. The search results are usually presented in a list and are commonly called hits. The most widely used type of search engine is a web search engine, which searches for information on the World Wide Web.
Spamdexing is the deliberate manipulation of search engine indexes. It involves a number of methods, such as link building and repeating related and/or unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system.
A web directory or link directory is an online list or catalog of websites. That is, it is a directory on the World Wide Web of the World Wide Web. Historically, directories typically listed entries on people or businesses, and their contact information; such directories are still in use today. A web directory includes entries about websites, including links to those websites, organized into categories and subcategories. Besides a link, each entry may include the title of the website, and a description of its contents. In most web directories, the entries are about whole websites, rather than individual pages within them. Websites are often limited to inclusion in only a few categories.
Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.
On the World Wide Web, a link farm is any group of websites that all hyperlink to other sites in the group for the purpose of increasing SEO rankings. In graph theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine. Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites, and are not considered a form of spamdexing.
Relative to some web resource, a backlink is a link from some other website to that web resource. A web resource may be a website, web page, or web directory.
Findability is the ease with which information contained on a website can be found, both from outside the website and by users already on the website. Although findability has relevance outside the World Wide Web, the term is usually used in that context. Most relevant websites do not come up in the top results because designers and engineers do not cater to the way ranking algorithms currently work. Its importance can be determined from the first law of e-commerce, which states "If the user can't find the product, the user can't buy the product." As of December 2014, of the 10.3 billion monthly Google searches by Internet users in the United States, an estimated 78% were made to research products and services online.
Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.
Hyperlink-Induced Topic Search (HITS) is a link analysis algorithm that rates web pages, developed by Jon Kleinberg. The idea behind hubs and authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming; that is, certain web pages, known as hubs, served as large directories that were not actually authoritative in the information they held, but were used as compilations of a broad catalog of information that led users directly to other authoritative pages. In other words, a good hub is a page that points to many other pages, while a good authority is a page that is linked to by many different hubs.
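The mutual reinforcement between hubs and authorities can be expressed as a pair of alternating score updates. Below is a minimal sketch of the HITS iteration; the page names and link graph are assumptions made purely for the example.

```python
# Minimal sketch of the HITS (hubs and authorities) iteration.
# The link graph below is an illustrative assumption, not real web data.

links = {  # page -> pages it links to
    "directory": ["guide", "reference", "faq"],
    "blog": ["guide", "reference"],
    "guide": ["reference"],
}

def hits(graph, iterations=20):
    pages = set(graph) | {t for targets in graph.values() for t in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority score: sum of the hub scores of the pages linking to the page
        auth = {p: sum(hub[src] for src, targets in graph.items() if p in targets)
                for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # hub score: sum of the authority scores of the pages it links to
        hub = {p: sum(auth[t] for t in graph.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

hubs, authorities = hits(links)
print(hubs)         # "directory" emerges as the strongest hub
print(authorities)  # "reference" emerges as the strongest authority
```

The two scores reinforce each other: a page that links to strong authorities becomes a stronger hub, and a page linked to by strong hubs becomes a stronger authority.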
Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERPs) primarily through paid advertising. SEM may incorporate search engine optimization (SEO), which adjusts or rewrites website content and site architecture to achieve a higher ranking in search engine results pages to enhance pay per click (PPC) listings and increase the Call to action (CTA) on the website.
Local search is the use of specialized Internet search engines that allow users to submit geographically constrained searches against a structured database of local business listings. Typical local search queries include not only information about "what" the site visitor is searching for but also "where" information, such as a street address, city name, postal code, or geographic coordinates like latitude and longitude. Examples of local searches include "Hong Kong hotels", "Manhattan restaurants", and "Dublin car rental". Local searches exhibit explicit or implicit local intent. A search that includes a location modifier, such as "Bellevue, WA" or "14th arrondissement", is an explicit local search. A search that references a product or service that is typically consumed locally, such as "restaurant" or "nail salon", is an implicit local search.
The sandbox effect is a theory about the way Google ranks web pages in its index. It is the subject of much debate—its existence has been written about since 2004, but not confirmed, with several statements to the contrary.
A search engine results page (SERP) is a webpage that is displayed by a search engine in response to a query by a user. The main component of a SERP is the listing of results that are returned by the search engine in response to a keyword query.
Google Personalized Search is a personalized search feature of Google Search, introduced in 2004. All searches on Google Search are associated with a browser cookie record. When a user performs a search, the search results are not only based on the relevance of each web page to the search term, but also on which websites the user visited through previous search results. This provides a more personalized experience that can increase the relevance of the search results for the particular user. Such filtering may also have side effects, such as the creation of a filter bubble.
In the field of search engine optimization (SEO), link building describes actions aimed at increasing the number and quality of inbound links to a webpage with the goal of increasing the search engine rankings of that page or website. Briefly, link building is the process of establishing relevant hyperlinks to a website from external sites. Link building can increase the number of high-quality links pointing to a website, in turn increasing the likelihood of the website ranking highly in search engine results. Link building is also a proven marketing tactic for increasing brand awareness.
Search neutrality is a principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance. This means that when a user types in a search engine query, the engine should return the most relevant results found in the provider's domain, without manipulating the order of the results, excluding results, or in any other way manipulating the results to a certain bias.
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google:
PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.
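A commonly cited textbook formulation of this idea expresses the rank of a page as a damped sum over the ranks of the pages linking to it (with damping factor $d$, typically around 0.85):

$$PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}$$

where $N$ is the total number of pages, $M(p_i)$ is the set of pages linking to $p_i$, and $L(p_j)$ is the number of outbound links on $p_j$. A link from a page thus passes on a share of that page's own rank, which is how both the number and the quality of links enter the estimate.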
Competitor backlinking is a search engine optimization strategy that involves analyzing the backlinks of competing websites within a vertical search. The outcome of this activity is designed to increase organic search engine rankings and to gain an understanding of the link building strategies used by business competitors.
A website audit is a full analysis of all the factors that affect a website's visibility in search engines. This standard method gives complete insight into any website, its overall traffic, and its individual pages. A website audit is completed solely for marketing purposes. The goal is to detect weak points in campaigns that affect web performance.
Local search engine optimization is similar to (national) SEO in that it is also a process affecting the visibility of a website or a web page in a web search engine's unpaid results, often referred to as "natural", "organic", or "earned" results. In general, the higher a site ranks on the search results page and the more frequently it appears in the search results list, the more visitors it will receive from the search engine's users; these visitors can then be converted into customers. Local SEO, however, differs in that it focuses on optimizing a business's online presence so that its web pages are displayed by search engines when users enter local searches for its products or services. Ranking for local search involves a similar process to general SEO but includes some specific elements to rank a business for local search.