Eigenfactor

The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, is a rating of the total importance of a scientific journal. [1] Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to contribute more to the Eigenfactor score than those from poorly ranked journals. [2] As a measure of importance, the Eigenfactor score scales with the total impact of a journal: all else being equal, journals with greater total impact on the field have larger Eigenfactor scores. Because they weight citations by the rank of the citing journal, metrics such as the Eigenfactor and other PageRank-based scores reduce the effect of self-referential citation groups. [3] [4]


Eigenfactor scores and Article Influence scores are calculated by eigenfactor.org, where they can be freely viewed. The Eigenfactor score is intended to measure the importance of a journal to the scientific community by considering the origin of its incoming citations, and is thought to reflect how frequently an average researcher would access content from that journal. [2] However, the Eigenfactor score is influenced by journal size: the score doubles when the journal doubles in size (measured as the number of published articles per year). [5] The Article Influence score measures the average influence of articles in the journal and is therefore comparable to the traditional impact factor.
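The relationship between the two scores can be sketched with made-up figures. This follows the normalization described in the eigenfactor.org FAQ, where Article Influence divides a journal's Eigenfactor score by the journal's share of all articles, so the mean journal in the database scores 1.0; the numbers below are purely illustrative:

```python
# Hypothetical figures for one journal:
eigenfactor = 0.05       # Eigenfactor score (a percentage of all weighted citations)
article_share = 0.00025  # the journal's fraction of all articles in the database

# Article Influence rescales the Eigenfactor score per article; the 0.01
# factor normalizes so the average journal scores 1.0.
article_influence = 0.01 * eigenfactor / article_share
print(round(article_influence, 6))  # 2.0: articles here are twice as influential as average

# Doubling the journal's size doubles its Eigenfactor score but leaves
# Article Influence (per-article influence) essentially unchanged.
import math
doubled = 0.01 * (2 * eigenfactor) / (2 * article_share)
assert math.isclose(doubled, article_influence)
```

This illustrates why the Eigenfactor score rewards large journals while Article Influence, like the impact factor, is a per-article measure.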

The Eigenfactor approach is thought to be more robust than the impact factor metric, [6] which simply counts incoming citations without considering their significance. [7] While the Eigenfactor score is correlated with total citation count for medical journals, [8] the two metrics provide significantly different information: for a given number of citations, citations from more significant journals result in a higher Eigenfactor score. [9] Eigenfactor is closely related to eigenvector centrality and PageRank.
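The family resemblance to PageRank can be made concrete. The following is a minimal sketch, not the exact Eigenfactor algorithm (which additionally excludes self-citations, uses a five-year citation window, and applies its own teleportation weighting): journals are ranked by damped power iteration over a small hypothetical citation matrix.

```python
# Hypothetical citation matrix: C[i][j] = citations from journal j to journal i.
# Self-citations are zeroed out, as in the Eigenfactor method.
C = [
    [0, 5, 1],
    [3, 0, 1],
    [2, 1, 0],
]
n = len(C)

# Column-normalize so each citing journal distributes one unit of influence.
col_sums = [sum(C[i][j] for i in range(n)) for j in range(n)]
H = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Damped power iteration, as in PageRank; alpha = 0.85 is the customary damping.
alpha = 0.85
pi = [1.0 / n] * n
for _ in range(100):
    pi = [alpha * sum(H[i][j] * pi[j] for j in range(n)) + (1 - alpha) / n
          for i in range(n)]

print(pi)  # stationary influence vector; journal 0 ranks highest here
```

Note how journal 0 outranks the others not only because it receives the most citations, but because its citations come disproportionately from the next-most-influential journal; a raw citation count would miss that distinction.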

Originally, Eigenfactor scores measured the importance of journals; the method has since been extended to the author level. [10] It can also be used in combination with the h-index to evaluate the work of individual scientists.

Related Research Articles

Scientific citation

Scientific citation is providing detailed reference in a scientific publication, typically a paper or book, to previous published communications which have a bearing on the subject of the new publication. The purpose of citations in original work is to allow readers of the paper to refer to cited work to assist them in judging the new work, source background information vital for future development, and acknowledge the contributions of earlier workers. Citations in, say, a review paper bring together many sources, often recent, in one place.

Scopus is Elsevier's abstract and citation database, launched in 2004. Scopus covers 36,377 titles from approximately 11,678 publishers, of which 34,346 are peer-reviewed journals in the top-level subject fields of life sciences, social sciences, physical sciences and health sciences. It covers three types of sources: book series, journals, and trade journals. All journals in the Scopus database are reviewed annually for sufficiently high quality according to four numerical quality measures for each title: h-index, CiteScore, SJR and SNIP. Scopus also allows patent searches in the dedicated Lexis-Nexis patent database, albeit with limited functionality.

The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations of articles published in the last two years in a given journal, as indexed by Clarivate's Web of Science.

Bibliometrics: Statistical analysis of written publications

Bibliometrics is the use of statistical methods to analyse books, articles and other publications, especially in scientific contexts. Bibliometric methods are frequently used in the field of library and information science. Bibliometrics is closely associated with scientometrics, the analysis of scientific metrics and indicators, to the point that both fields largely overlap.

Scientometrics is the field of study which concerns itself with measuring and analysing scholarly literature. Scientometrics is a sub-field of informetrics. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience. Critics have argued that over-reliance on scientometrics has created a system of perverse incentives, producing a publish or perish environment that leads to low-quality research.

Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the directed graph of citations — links from one document to another document — to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books. For another example, judges of law support their judgements by referring back to judgements made in earlier cases. An additional example is provided by patents which contain prior art, citation of earlier patents relevant to the current claim. The digitization of patent data and increasing computing power have led to a community of practice that uses these citation data to measure innovation attributes, trace knowledge flows, and map innovation networks.

Citation impact or citation rate is a measure of how many times an academic journal article, book or author is cited by other articles, books or authors. Citation counts are interpreted as measures of the impact or influence of academic work and have given rise to the field of bibliometrics or scientometrics, specializing in the study of patterns of academic impact through citation analysis. The importance of journals can be measured by the average citation rate, the ratio of the number of citations to the number of articles published within a given time period and in a given index, such as the journal impact factor or the CiteScore. It is used by academic institutions in decisions about academic tenure, promotion and hiring, and hence is also used by authors in deciding which journal to publish in. Citation-like measures are also used in other fields that do ranking, such as Google's PageRank algorithm, software metrics, college and university rankings, and business performance indicators.

The h-index is an author-level metric that measures both the productivity and citation impact of the publications, initially used for an individual scientist or scholar. The h-index correlates with success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index has more recently been applied to the productivity and impact of a scholarly journal as well as a group of scientists, such as a department or university or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.

Journal of Biological Chemistry: Academic journal

The Journal of Biological Chemistry (JBC) is a weekly peer-reviewed scientific journal that was established in 1905. Since 1925, it has been published by the American Society for Biochemistry and Molecular Biology. It covers research in areas of biochemistry and molecular biology. The editor is Alex Toker. As of January 2021, the journal is fully open access. In-press articles are available free on its website immediately after acceptance.

Journal Citation Reports (JCR) is an annual publication by Clarivate. It has been integrated with the Web of Science and is accessed from the Web of Science Core Collection. It provides information about academic journals in the natural and social sciences, including impact factors. The JCR was originally published as a part of the Science Citation Index. Currently, the JCR, as a distinct service, is based on citations compiled from the Science Citation Index Expanded and the Social Sciences Citation Index. As of the 2023 edition, journals from the Arts and Humanities Citation Index and the Emerging Sources Citation Index are also included.

Journal ranking is widely used in academic circles in the evaluation of an academic journal's impact and quality. Journal rankings are intended to reflect the place of a journal within its field, the relative difficulty of being published in that journal, and the prestige associated with it. They have been introduced as official research evaluation tools in several countries.

Web of Science: Online subscription index of citations

The Web of Science is a paid-access platform that provides access to multiple databases of reference and citation data from academic journals, conference proceedings, and other documents across academic disciplines. It was originally produced by the Institute for Scientific Information and is currently owned by Clarivate.

SCImago Journal Rank: Metric of scholarly journals

The SCImago Journal Rank (SJR) indicator is a measure of the prestige of scholarly journals that accounts for both the number of citations received by a journal and the prestige of the journals where the citations come from.

PageRank: Algorithm used by Google Search to rank web pages

PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google:

PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.

Carl Bergstrom: American theoretical and evolutionary biologist

Carl Theodore Bergstrom is a theoretical and evolutionary biologist and a professor at the University of Washington in Seattle, Washington, United States. Bergstrom is a critic of low-quality or misleading scientific research. He is the co-author of a book on misinformation called Calling Bullshit: The Art of Skepticism in a Data-Driven World and teaches a class of the same name at the University of Washington.

Altmetrics: Alternative metrics for analyzing scholarship

In scholarly and scientific publishing, altmetrics are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc.

In graph theory and network analysis, node influence metrics are measures that rank or quantify the influence of every node within a graph. They are related to centrality indices. Applications include measuring the influence of each person in a social network, understanding the role of infrastructure nodes in transportation networks, the Internet, or urban networks, and the participation of a given node in disease dynamics.

Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many metrics have been developed that take into account varying numbers of factors.

There are a number of approaches to ranking academic publishing groups and publishers. Rankings rely on subjective impressions by the scholarly community, on analyses of the prize winners of scientific associations, on discipline, on a publisher's reputation, and on its impact factor.

The Leiden Manifesto for research metrics (LM) is a list of "ten principles to guide research evaluation", published as a comment in Volume 520, Issue 7548 of Nature, on 22 April 2015. It was formulated by public policy professor Diana Hicks, scientometrics professor Paul Wouters, and their colleagues at the 19th International Conference on Science and Technology Indicators, held between 3–5 September 2014 in Leiden, The Netherlands.

References

  1. Bergstrom, C. T.; West, J. D.; Wiseman, M. A. (2008). "The Eigenfactor Metrics". Journal of Neuroscience . 28 (45): 11433–11434. doi: 10.1523/JNEUROSCI.0003-08.2008 . PMC   6671297 . PMID   18987179.
  2. Bergstrom, C. T. (2007). "Eigenfactor: Measuring the value and prestige of scholarly journals". College & Research Libraries News. 68 (5): 314–316. doi:10.5860/crln.68.5.7804.
  3. Ma, Nan; Guan, Jiancheng; Zhao, Yi (2008). "Bringing PageRank to the citation analysis". Information Processing & Management. 44 (2): 800–810.
  4. Sun, Y.; Latora, V. (2020). "The evolution of knowledge within and across fields in modern physics". Scientific Reports. 10: 12097.
  5. "Eigenfactor.org FAQ". 14 July 2015.
  6. Bollen, Johan; Van de Sompel, Herbert; Hagberg, Aric; Chute, Ryan (2009). "A principal component analysis of 39 scientific impact measures". PLOS ONE. 4 (6): e6022. arXiv: 0902.2183 . Bibcode:2009PLoSO...4.6022B. doi: 10.1371/journal.pone.0006022 . PMC   2699100 . PMID   19562078.
  7. Fersht, A. (Apr 2009). "The most influential journals: Impact Factor and Eigenfactor". Proceedings of the National Academy of Sciences of the United States of America . 106 (17): 6883–6884. Bibcode:2009PNAS..106.6883F. doi: 10.1073/pnas.0903307106 . ISSN   0027-8424. PMC   2678438 . PMID   19380731.
  8. Davis, P. M. (2008). "Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts?". Journal of the American Society for Information Science and Technology . 59 (13): 2186–2188. arXiv: 0807.2678 . doi:10.1002/asi.20943. S2CID   11358187.
  9. West, Jevin D.; Bergstrom, Theodore; Bergstrom, Carl T. (2010). "Big Macs and Eigenfactor Scores: Don't Let Correlation Coefficients Fool You". arXiv: 0911.1807v2 [cs.CY].
  10. West, Jevin D. (2013). "Author-level Eigenfactor metrics: Evaluating the influence of authors, institutions, and countries within the social science research network community". Journal of the American Society for Information Science and Technology. 64 (4): 787–801. doi:10.1002/asi.22790.