Domain authority

The domain authority (also referred to as thought leadership) of a website describes its relevance for a specific subject area or industry; "Domain Authority" is also the name of a branded search engine ranking score developed by Moz. [1] This relevance has a direct impact on a website's ranking by search engines, which try to assess domain authority through automated analytic algorithms. The influence of domain authority on website listings in the search engine results pages (SERPs) gave rise to a whole industry of black-hat SEO providers who try to feign an increased level of domain authority. [2] The rankings produced by major search engines, e.g., Google's PageRank, are agnostic of specific industries or subject areas and assess a website in the context of the totality of websites on the Internet. [3] The results on a SERP set the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility, as the highest-ranked sites matching the search words occupy the first positions in the SERPs. [4]

Dimensions

Domain authority can be described through four dimensions:

  1. Prestige of a website and its authors
  2. Quality of the information presented
  3. Information and website centrality
  4. Competitive situation around a subject

The weight of these factors varies depending on the ranking body. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on a website, the novelty of the content, but also the competitive situation around the discussed subject area or the quality of the outgoing links. [5] Several search engines (e.g., Bing, Google, Yahoo) have developed automated analyses and ranking algorithms for domain authority. Lacking the "human reasoning" that would allow them to judge quality directly, they make use of complementary parameters such as information or website prestige and centrality from a graph-theoretical perspective, manifested in the quantity and quality of inbound links. [6] The software-as-a-service company Moz has developed an algorithm and weighted level metric, branded as "Domain Authority", which predicts a website's performance in search engine rankings on a scale from 0 to 100. [7] [8]

Prestige of website and authors

Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis of graph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing to it and by the quality of those nodes; the nodes' quality is in turn defined through their own prestige. This definition ensures that a prestigious website is not only pointed at by many other websites, but that those pointing websites are prestigious themselves. [9] Similar to the prestige of a website, the contributing authors' prestige is taken into consideration [10] in cases where the authors are named and identified (e.g., with their Twitter or Google Plus profile). In this case, prestige is measured by the prestige of the authors who quote or refer to them and by the quantity of referrals these authors receive. [5] Search engines use additional factors to scrutinize a website's prestige. Google's PageRank, for example, looks at factors like link diversification and link dynamics: when too many links come from the same domain or webmaster, there is a risk of black-hat SEO, and when backlinks grow rapidly, this nourishes the suspicion of spam or black-hat SEO as the origin. [11] In addition, Google looks at factors like the public availability of the WHOIS information of the domain owner, the use of global top-level domains, domain age and volatility of ownership to assess apparent prestige. Lastly, search engines look at the traffic and the amount of organic searches for a site, as the amount of traffic should be congruent with the level of prestige that a website has in a certain domain. [5]
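The recursive definition above, where a site is prestigious when prestigious sites point to it, can be sketched as a power iteration over the link graph. The three-site graph below is made up for illustration; this is not any search engine's actual algorithm:

```python
def prestige(inbound, iterations=50):
    """inbound maps each site to the list of sites linking to it.
    A site's prestige is the (normalized) sum of its linkers' prestige,
    so the scores converge toward the graph's dominant eigenvector."""
    sites = set(inbound) | {s for linkers in inbound.values() for s in linkers}
    score = {s: 1.0 for s in sites}
    for _ in range(iterations):
        new = {s: sum(score[linker] for linker in inbound.get(s, ()))
               for s in sites}
        total = sum(new.values()) or 1.0  # avoid division by zero
        score = {s: v / total for s, v in new.items()}
    return score

# Hypothetical web: "a" is linked to by both other sites, "b" by one
scores = prestige({"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]})
print(scores["a"] > scores["b"])  # True
```

The iteration captures the key property: "a" outranks "b" not just by raw link count but because its linkers are themselves well linked.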

Information quality

Information quality describes the value which information provides to the reader. Wang and Strong categorize the assessable dimensions of information into intrinsic (accuracy, objectivity, believability, reputation), contextual (relevancy, value-added/authenticity, timeliness, completeness, quantity), representational (interpretability, format, coherence, compatibility) and accessibility (accessibility and access security). [12] Humans can judge quality based on experience in assessing content, style and grammatical correctness. Information systems like search engines need indirect means that allow conclusions about the quality of information. In 2015, Google's ranking algorithm took approximately 200 ranking factors, combined in a learning algorithm, into account to assess information quality. [13]

Centrality of a website

Prominent actors have extensive and living relationships with other (prominent) actors. This makes them more visible and their content more relevant, interlinked and useful. [9] Centrality from a graph-theoretical perspective describes nondirectional relationships, making no distinction between receiving and sending information. From this point of view, it includes the inbound links considered in the definition of "prestige", complemented with outgoing links. Another difference between prestige and centrality is that prestige is measured for a complete website or an author, whereas centrality can be considered at a more granular level, such as an individual blog post. Search engines look at various factors to judge the quality of outgoing links, i.e., at link centrality, which describes the quality, quantity and relevance of outgoing links and the prestige of their destinations. They also look at the frequency of new content publication ("freshness of information") to ensure that the website is still an active player in the community. [5]
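A minimal nondirectional counterpart to the prestige sketch is degree centrality, which counts a page's links in both directions and can be applied at the granularity of individual posts. The page names below are hypothetical:

```python
from collections import Counter

def degree_centrality(links):
    """links: iterable of (source, target) hyperlink pairs.
    Degree centrality counts a page's links in either direction,
    so inbound and outbound links contribute alike."""
    degree = Counter()
    for src, dst in links:
        degree[src] += 1  # outgoing link of src
        degree[dst] += 1  # incoming link of dst
    return degree

# post1 exchanges links with post2 and also receives one from post3
centrality = degree_centrality(
    [("post1", "post2"), ("post2", "post1"), ("post3", "post1")])
print(centrality["post1"])  # 3: two inbound links plus one outbound
```

Unlike the prestige measure, the outgoing link from post1 raises its score here, which is exactly the distinction the text draws between prestige and centrality.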

Competitive situation around a subject

The domain authority that a website attains is not the only factor which defines its positioning in the SERPs of search engines. The second important factor is the competitiveness of a specific sector. Subjects like SEO are very competitive. A website needs to outperform the prestige of competing websites to attain domain authority. This prestige, relative to other websites, can be defined as “relative domain authority.” [14]

Related Research Articles

In computing, a search engine is an information retrieval software system designed to help find information stored on one or more computer systems. Search engines discover, crawl, transform, and store information for retrieval and presentation in response to user queries. The search results are usually presented in a list and are commonly called hits. The most widely used type of search engine is a web search engine, which searches for information on the World Wide Web.

Spamdexing is the deliberate manipulation of search engine indexes. It involves a number of methods, such as link building and repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system.

Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.

On the World Wide Web, a link farm is any group of websites that all hyperlink to other sites in the group for the purpose of increasing SEO rankings. In graph theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine. Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites, and are not considered a form of spamdexing.

A backlink is a link from some other website to that web resource. A web resource may be a website, web page, or web directory.

Findability is the ease with which information contained on a website can be found, both from outside the website and by users already on the website. Although findability has relevance outside the World Wide Web, the term is usually used in that context. Many relevant websites do not come up in the top results because their designers and engineers do not cater to the way ranking algorithms currently work. Its importance can be determined from the first law of e-commerce, which states "If the user can’t find the product, the user can’t buy the product." As of December 2014, out of 10.3 billion monthly Google searches by Internet users in the United States, an estimated 78% are made to research products and services online.

Hyperlink-Induced Topic Search (HITS) is a link analysis algorithm that rates Web pages, developed by Jon Kleinberg. The idea behind hubs and authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming: certain web pages, known as hubs, served as large directories that were not actually authoritative in the information they held, but were used as compilations of a broad catalog of information that led users directly to other authoritative pages. In other words, a good hub is a page that points to many other pages, while a good authority is a page that is linked to by many different hubs.
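The hub/authority iteration can be sketched compactly on a toy graph. The normalization below is simplified (sum instead of the Euclidean norm used by Kleinberg), which does not change the relative ordering:

```python
def hits(links, iterations=50):
    """links maps each page to the pages it links to.
    Returns (hub, authority) scores in the spirit of Kleinberg's HITS."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # a good authority is pointed to by good hubs
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                for p in pages}
        # a good hub points to good authorities
        hub = {p: sum(auth[t] for t in links.get(p, ())) for p in pages}
        # normalize so the scores stay bounded
        na, nh = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return hub, auth

# "directory" only links out; "page1"/"page2" only receive links
hub, auth = hits({"directory": ["page1", "page2"], "page1": [], "page2": []})
print(hub["directory"] > hub["page1"], auth["page1"] > auth["directory"])  # True True
```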

TrustRank is an algorithm that conducts link analysis to separate useful webpages from spam and helps search engines rank pages in SERPs. It is a semi-automated process, which means that it needs some human assistance in order to function properly. Search engines have many different algorithms and ranking factors that they use when measuring the quality of webpages. TrustRank is one of them.

Local search is the use of specialized Internet search engines that allow users to submit geographically constrained searches against a structured database of local business listings. Typical local search queries include not only information about "what" the site visitor is searching for but also "where" information, such as a street address, city name, postal code, or geographic coordinates like latitude and longitude. Examples of local searches include "Hong Kong hotels", "Manhattan restaurants", and "Dublin car rental". Local searches exhibit explicit or implicit local intent. A search that includes a location modifier, such as "Bellevue, WA" or "14th arrondissement", is an explicit local search. A search that references a product or service that is typically consumed locally, such as "restaurant" or "nail salon", is an implicit local search.

A scraper site is a website that copies content from other websites using web scraping. The content is then mirrored with the goal of creating revenue, usually through advertising and sometimes by selling user data.

The Sandbox effect is a theory about the way Google ranks web pages in its index. It is the subject of much debate—its existence has been written about since 2004, but not confirmed, with several statements to the contrary.

nofollow is a setting on a web page hyperlink that directs search engines not to use the link for page ranking calculations. It is specified in the page as a type of link relation; that is: <a rel="nofollow" ...>. Because search engines often calculate a site's importance according to the number of hyperlinks from other sites, the nofollow setting allows website authors to indicate that the presence of a link is not an endorsement of the target site's importance.
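As an illustration of how a crawler might honor this setting when counting endorsement links, the sketch below uses Python's standard html.parser; the URLs are made up:

```python
from html.parser import HTMLParser

class FollowedLinkCounter(HTMLParser):
    """Collects hrefs, skipping links whose rel attribute
    contains the "nofollow" token."""
    def __init__(self):
        super().__init__()
        self.followed = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel_tokens = (attrs.get("rel") or "").split()
        if "nofollow" not in rel_tokens and "href" in attrs:
            self.followed.append(attrs["href"])

parser = FollowedLinkCounter()
parser.feed('<a href="https://example.org/a">counted</a>'
            '<a rel="nofollow" href="https://example.org/b">skipped</a>')
print(parser.followed)  # ['https://example.org/a']
```

Splitting the rel attribute into tokens matters because rel can carry several space-separated values, e.g. rel="nofollow noopener".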

A search engine results page (SERP) is a webpage that is displayed by a search engine in response to a query by a user. The main component of a SERP is the listing of results that are returned by the search engine in response to a keyword query.

Google Personalized Search is a personalized search feature of Google Search, introduced in 2004. All searches on Google Search are associated with a browser cookie record. When a user performs a search, the search results are not only based on the relevance of each web page to the search term, but also on which websites the user visited through previous search results. This provides a more personalized experience that can increase the relevance of the search results for the particular user. Such filtering may also have side effects, such as the creation of a filter bubble.

In the field of search engine optimization (SEO), link building describes actions aimed at increasing the number and quality of inbound links to a webpage with the goal of increasing the search engine rankings of that page or website. Briefly, link building is the process of establishing relevant hyperlinks to a website from external sites. Link building can increase the number of high-quality links pointing to a website, in turn increasing the likelihood of the website ranking highly in search engine results. Link building is also a proven marketing tactic for increasing brand awareness.

Search neutrality is a principle that search engines should have no editorial policies other than that their results be comprehensive, impartial and based solely on relevance. This means that when a user types in a search engine query, the engine should return the most relevant results found in the provider's domain, without manipulating the order of the results, excluding results, or in any other way manipulating the results to a certain bias.

PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google:

PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.
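The counting scheme described in the quote can be sketched as the classic iterative formulation, here with the conventional 0.85 damping factor and a made-up three-page graph; Google's production ranking involves many refinements beyond this:

```python
def pagerank(out_links, damping=0.85, iterations=50):
    """out_links maps each page to the pages it links to.
    A page's rank is a damped sum of the ranks of its linkers,
    each linker's rank split evenly across its outgoing links."""
    pages = set(out_links) | {p for tgts in out_links.values() for p in tgts}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        rank = {p: (1 - damping) / n
                   + damping * sum(rank[q] / len(out_links[q])
                                   for q in pages
                                   if p in out_links.get(q, ()))
                for p in pages}
    return rank

# "home" receives links from both other pages, so it ranks highest
ranks = pagerank({"home": ["about"], "about": ["home"], "blog": ["home"]})
print(max(ranks, key=ranks.get))  # home
```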

The CheiRank is an eigenvector with a maximal real eigenvalue of the Google matrix constructed for a directed network with the directions of links inverted. It is similar to the PageRank vector, which ranks the network nodes on average proportionally to their number of incoming links and is the maximal eigenvector of the Google matrix with the given initial direction of links. Due to the inversion of link directions, the CheiRank ranks the network nodes on average proportionally to their number of outgoing links. Since each node belongs to both the CheiRank and PageRank vectors, the ranking of information flow on a directed network becomes two-dimensional.

Website audit is a full analysis of all the factors that affect a website's visibility in search engines. This standard method gives a complete insight into any website, overall traffic, and individual pages. Website audit is completed solely for marketing purposes. The goal is to detect weak points in campaigns that affect web performance.

Local search engine optimization is similar to (national) SEO in that it is also a process affecting the visibility of a website or a web page in a web search engine's unpaid results often referred to as "natural", "organic", or "earned" results. In general, the higher ranked on the search results page and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users; these visitors can then be converted into customers. Local SEO, however, differs in that it is focused on optimizing a business's online presence so that its web pages will be displayed by search engines when users enter local searches for its products or services. Ranking for local search involves a similar process to general SEO but includes some specific elements to rank a business for local search.

References

  1. Chi, Clifford (6 February 2021). "What Is Domain Authority and How Can You Improve It?". blog.hubspot.com. Retrieved 2021-04-07.
  2. Ntoulas, Alexandros; Najork, Marc; Manasse, Mark; Fetterly, Dennis (May 23–26, 2006). "Detecting spam web pages through content analysis" (PDF). Proceedings of the 15th International Conference on World Wide Web. Vol. WWW 2006. pp. 83–92. doi:10.1145/1135777.1135794. ISBN 1595933239. S2CID 9068476.
  3. Brin, Sergey; Page, Larry (January 29, 1998). "The PageRank Citation Ranking: Bringing Order to the Web" (PDF). Stanford University InfoLab Publication Server. Archived from the original (PDF) on January 26, 2009. Retrieved December 12, 2015.
  4. Luh, Cheng-Jye; Yang, Sheng-An; Huang, Ting-Li Dean (2016-01-01). "Estimating Google's search engine ranking function from a search engine optimization perspective". Online Information Review. 40 (2): 239–255. doi:10.1108/OIR-04-2015-0112. ISSN   1468-4527.
  5. Scholten, Ulrich (Nov 29, 2015). "What Is Domain Authority and How Do I Build It?". VentureSkies.
  6. Keren, A. "Zagzebski on Authority and Preemption in the Domain of Belief". European Journal for Philosophy of Religion, 2014.
  7. Zilincan, Jakub; Kryvinska., Natalia (May 28, 2015). "Improving Rank of a Website in Search Results–Experimental Approach". International Conference at Brno University of Technology: Perspectives of Business and Entrepreneurship Development - System Engineering Track. 15.
  8. Orduna-Malea, Enrique; Aytac, Selenay (May 9, 2015). "Revealing the online network between university and industry: the case of Turkey". Scientometrics. 105 (3): 1849–1866. arXiv: 1506.03012 . doi:10.1007/s11192-015-1596-4. S2CID   255006902.
  9. Wasserman, Stanley; Faust, Katherine (1994). Social Network Analysis: Methods and Applications (Structural Analysis in the Social Sciences). New York, USA: Cambridge University Press. ISBN 0-521-38707-8.
  10. Sengoren, Arif (22 April 2022). "How to measure domain authority and how to value it".
  11. Brożek, Anna (September 2013). "Bocheński on authority". Studies in East European Thought. 65 (1): 115–133. doi:10.1007/s11212-013-9175-9. S2CID 255070107.
  12. Wang, Richard Y.; Strong, Diane M. (October 26, 2013). "Beyond Accuracy: What Data Quality Means to Data Consumers". Journal of Management Information Systems. 12 (4): 5–33. doi:10.1080/07421222.1996.11518099. JSTOR   40398176. S2CID   205581875.
  13. Patel, Priyanka. "Google Page Rank Algorithm and Its Updates". Academia.edu.
  14. "What Are Dofollow Links?". 28 June 2022. Retrieved 28 June 2022.