The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index that reflects the yearly mean number of citations received by articles published in that journal during the two preceding years. It is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones.
The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI). Impact factors have been calculated yearly since 1975 for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, and became known as Thomson ISI. In 2018, Thomson ISI was sold to Onex Corporation and Baring Private Equity Asia, which founded a new corporation, Clarivate, now the publisher of the JCR.
In any given year, the impact factor of a journal is the number of citations, received in that year, of articles published in that journal during the two preceding years, divided by the total number of "citable items" published in that journal during the two preceding years:
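Expressed as a formula (the notation below is illustrative rather than JCR's own), the impact factor of a journal for year y is:

    \mathrm{IF}_y = \frac{\mathrm{Citations}_y}{\mathrm{Publications}_{y-1} + \mathrm{Publications}_{y-2}}

where Citations_y is the number of citations received in year y by items the journal published in years y−1 and y−2, and Publications_{y−1} and Publications_{y−2} are the numbers of citable items published in those two years.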
For example, Nature had an impact factor of 41.577 in 2017:
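In the same illustrative notation, and without reproducing the underlying citation and item counts (which come from the proprietary JCR database), the 2017 figure corresponds to:

    \mathrm{IF}_{2017} = \frac{\mathrm{Citations}_{2017}}{\mathrm{Publications}_{2016} + \mathrm{Publications}_{2015}} \approx 41.577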
This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. Note that 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.
The value of the impact factor depends on how "citations" and "publications" are defined; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows. "Publications" are items that are classed as "article", "review" or "proceedings paper" in the Web of Science (WoS) database; other items like editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database, but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, which is defined and calculated by ISI and cannot be verified by external users.
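As a rough illustration of how the two counts enter the calculation (the record format and field names below are simplified assumptions for this sketch, not ISI's actual data model), a small script might look like this:

    # Illustrative sketch: compute an impact factor for a given year from
    # simplified per-item records. Field names are assumptions, not the
    # JCR's internal schema.
    CITABLE_TYPES = {"article", "review", "proceedings paper"}

    def impact_factor(items, citations, year):
        """items: dicts with 'year' and 'type';
        citations: dicts with 'citing_year' and 'cited_item_year'."""
        window = {year - 1, year - 2}
        # Denominator: citable items published in the two preceding years.
        citable = sum(1 for it in items
                      if it["year"] in window and it["type"] in CITABLE_TYPES)
        # Numerator: citations received in `year` to items from those years.
        # (In JCR practice the numerator is not restricted to citations of
        # citable items, as discussed later in this article.)
        cites = sum(1 for c in citations
                    if c["citing_year"] == year and c["cited_item_year"] in window)
        return cites / citable if citable else float("nan")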
New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to Volume 1, and the number of articles published in the year prior to Volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with less than two years of indexing, based on partial citation data. The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years.
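In the same illustrative notation, the five-year impact factor for year y would be:

    \mathrm{IF}_{5,y} = \frac{\mathrm{Citations}_y}{\sum_{k=1}^{5} \mathrm{Publications}_{y-k}}

where the numerator counts citations received in year y by items published in the five preceding years.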
The impact factor is used to compare different journals within a certain field. The Web of Science indexes more than 11,500 science and social science journals.
Journal impact factors are often used to evaluate the merit of individual articles and individual researchers. This use of impact factors was summarised by Hoeffel:
Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty. ... In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.
As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Garfield agrees with Hoeffel, but warns about the "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal".
Numerous criticisms have been made regarding the use of impact factors. A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that are not normally distributed, and suggested that it would be more appropriate to present the median of these data. There is also a more general debate on the validity of the impact factor as a measure of journal importance and on the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors and other stakeholders. Others have made more general criticisms, arguing that the emphasis on impact factor results from the negative influence of neoliberal policies on academia, claiming that what is needed is not just replacement of the impact factor with more sophisticated metrics for science publications but also discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education.
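The mean-versus-median objection can be illustrated with a toy citation distribution (the numbers below are invented purely for illustration):

    # Toy example of a skewed citation distribution (values invented for
    # illustration): one highly cited paper pulls the mean far above the
    # median, which better reflects the "typical" article.
    from statistics import mean, median

    citations = [0, 0, 1, 1, 2, 2, 3, 4, 6, 120]
    print(mean(citations))    # 13.9
    print(median(citations))  # 2.0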
It has been stated that impact factors and citation analysis in general are affected by field-dependent factors, which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The percentage of total citations occurring in the first two years after publication also varies highly among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. Thus impact factors cannot be used to compare journals across disciplines.
Because citation counts have highly skewed distributions, the mean number of citations is potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself. For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles. Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally.
Indeed, impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. Additionally, impact factor is a journal metric and should not be used to assess individual researchers or institutions. The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, more than Nature (at 31.434) and Science (at 28.103). The second-most cited article in Acta Crystallographica Section A in 2008 only had 28 citations.
Journal rankings constructed based solely on impact factors only moderately correlate with those compiled from the results of expert surveys.
A.E. Cawkell, former Director of Research at the Institute for Scientific Information, remarked that the Science Citation Index (SCI), on which the impact factor is based, "would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints."
A journal can adopt editorial policies to increase its impact factor. For example, journals may publish a larger percentage of review articles, which are generally cited more than research reports. Thus review articles can raise the impact factor of a journal, and review journals will therefore often have the highest impact factors in their respective fields. Some journal editors set their submission policy to "by invitation only" to invite exclusively senior scientists to publish "citable" papers, in order to increase the journal's impact factor.
Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography in the hope that Journal Citation Reports will not deem it a "citable item"). As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. Items considered to be uncitable—and thus not incorporated in impact factor calculations—can, if cited, still enter into the numerator part of the equation, despite the ease with which such citations could be excluded. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may refer to either class.
Another, less insidious, tactic journals employ is to publish a large portion of their papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal, which will increase the journal's impact factor.
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports.
Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. However, cases of coercive citation have occasionally been reported for other disciplines.
The journal impact factor (JIF) was originally designed by Eugene Garfield as a metric to help librarians make decisions about which journals were worth subscribing to, as the JIF aggregates the number of citations to articles published in each journal. Since then, the JIF has become associated with journal "quality" and has gained widespread use for the evaluation of research and researchers instead, even at the institutional level. It thus has a significant impact on steering research practices and behaviours.
However, critics of the JIF state that use of the arithmetic mean in its calculation is problematic because the pattern of citation distribution is skewed. Citation distributions for eight selected journals, along with their JIFs and the percentage of citable items below the JIF, show that the distributions are clearly skewed, making the arithmetic mean an inappropriate statistic to use to say anything about individual papers within the citation distributions. More informative and readily available article-level metrics can be used instead, such as citation counts or "altmetrics", along with other qualitative and quantitative measures of research "impact".
Already around 2010, national and international research funding institutions were pointing out that numerical indicators such as the JIF should not be used as a measure of quality. In fact, the JIF is a highly manipulated metric, and the justification for its continued widespread use beyond its original narrow purpose seems due to its simplicity (an easily calculable and comparable number) rather than any actual relationship to research quality.
Empirical evidence shows that the misuse of the JIF – and journal ranking metrics in general – has a number of negative consequences for the scholarly communication system. These include confusion between the outreach of a journal and the quality of individual papers, and insufficient coverage of the social sciences and humanities as well as of research outputs from across Latin America, Africa, and South-East Asia. Additional drawbacks include the marginalization of research in vernacular languages and on locally relevant topics, inducement to unethical authorship and citation practices, and, more generally, the fostering of a reputation economy in academia based on publishers' prestige rather than actual research qualities such as rigorous methods, replicability and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has been shown to have deleterious effects on research quality.
JIFs are still regularly used to evaluate research in many countries, which is a problem since a number of outstanding issues remain around the opacity of the metric and the fact that it is often negotiated by publishers. However, these integrity problems appear to have done little to curb its widespread misuse.
A number of regional focal points and initiatives are now providing and suggesting alternative research assessment systems, including key documents such as the Leiden Manifesto and the San Francisco Declaration on Research Assessment (DORA). Recent developments around 'Plan S' call for broader adoption and implementation of such initiatives, alongside fundamental changes in the scholarly communication system. Thus, there is little basis for the popular simplification which connects JIFs with any measure of quality, and the ongoing inappropriate association of the two will continue to have deleterious effects. As appropriate measures of quality for authors and research, concepts of research excellence should be remodelled around transparent workflows and accessible research results.
Because "the impact factor is not always a reliable instrument", in November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes".
In July 2008, the International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions—for example, considering a limit on the number of publications per year to be taken into consideration for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20).
In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines to evaluate only articles, and no bibliometric information on candidates, in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor". This decision follows similar ones of the National Science Foundation (US) and the Research Assessment Exercise (UK).
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology together with a group of editors and publishers of scholarly journals created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions, including, in March 2015, the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe), who have endorsed the document on the DORA website.
Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science have proposed citation distribution metrics as an alternative to impact factors.
Some related values, also calculated and published by the same organization, include:
As with the impact factor, there are some nuances to this: for example, ISI excludes certain article types (such as news items, correspondence, and errata) from the denominator.
Additional journal-level metrics are available from other organizations. For example, CiteScore is a metric for serial titles in Scopus, launched in December 2016 by Elsevier. While these metrics apply only to journals, there are also author-level metrics, such as the h-index, that apply to individual researchers. In addition, article-level metrics measure impact at the article level instead of the journal level. Other more general alternative metrics, or "altmetrics", may include article views, downloads, or mentions in social media.
Fake impact factors are produced by some companies not affiliated with Journal Citation Reports. According to an article published in the United States National Library of Medicine, these include Global Impact Factor (GIF), Citefactor, and Universal Impact Factor (UIF). Jeffrey Beall maintained a list of such misleading metrics.
False impact factors are often used by predatory publishers. Consulting Journal Citation Reports' master journal list can confirm whether a publication is indexed by Journal Citation Reports. The use of fake impact metrics is considered a "red flag".
An academic or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. Academic journals serve as permanent and transparent forums for the presentation, scrutiny, and discussion of research. They are usually peer-reviewed or refereed. Content typically takes the form of articles presenting original research, review articles, and book reviews. The purpose of an academic journal, according to Henry Oldenburg, is to give researchers a venue to "impart their knowledge to one another, and contribute what they can to the Grand design of improving natural knowledge, and perfecting all Philosophical Arts, and Sciences."
Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of cost or other access barriers. With open access strictly defined, or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright.
Scopus is Elsevier’s abstract and citation database, launched in 2004. Scopus covers 36,377 titles from approximately 11,678 publishers, of which 34,346 are peer-reviewed journals in top-level subject fields: life sciences, social sciences, physical sciences and health sciences. It covers three types of sources: book series, journals, and trade journals. All journals covered in the Scopus database, regardless of publisher, are reviewed each year to ensure that high quality standards are maintained. Searches in Scopus also incorporate searches of patent databases. Scopus gives four types of quality measure for each title: h-index, CiteScore, SJR and SNIP.
The Institute for Scientific Information (ISI) was an academic publishing service, founded by Eugene Garfield in Philadelphia in 1956. ISI offered scientometric and bibliographic database services. Its specialty was citation indexing and analysis, a field pioneered by Garfield.
Bibliometrics is the use of statistical methods to analyse books, articles and other publications. Bibliometric methods are frequently used in the field of library and information science. The sub-field of bibliometrics which concerns itself with the analysis of scientific publications is called scientometrics. Citation analysis is a commonly used bibliometric method which is based on constructing the citation graph, a network or graph representation of the citations between documents. Many research fields use bibliometric methods to explore the impact of their field, the impact of a set of researchers, the impact of a particular paper, or to identify particularly impactful papers within a specific field of research. Bibliometrics also has a wide range of other applications, such as in descriptive linguistics, the development of thesauri, and evaluation of reader usage.
Scientometrics is the field of study which concerns itself with measuring and analysing scientific literature. Scientometrics is a sub-field of bibliometrics. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience.
Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the directed graph of citations — links from one document to another document — to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books. For another example, judges of law support their judgements by referring back to judgements made in earlier cases. An additional example is provided by patents which contain prior art, citation of earlier patents relevant to the current claim.
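A minimal sketch of the idea, using an invented list of directed citation links (citing document, cited document) and in-degree as a crude importance signal:

    # Each (citing, cited) pair is one edge of the directed citation graph.
    # Document identifiers are invented for illustration.
    from collections import Counter

    edges = [("paper1", "paper3"), ("paper2", "paper3"),
             ("paper2", "paper4"), ("paper5", "paper3")]

    # In-degree = number of citations a document has received.
    in_degree = Counter(cited for _, cited in edges)
    print(in_degree.most_common(1))  # [('paper3', 3)]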
Eugene Eli Garfield was an American linguist and businessman, one of the founders of bibliometrics and scientometrics. He helped to create Current Contents, Science Citation Index (SCI), Journal Citation Reports, and Index Chemicus, among others, and founded the magazine The Scientist.
"Publish or perish" is an aphorism describing the pressure to publish academic work in order to succeed in an academic career.
Citation impact quantifies the citation usage of scholarly works. It is a result of citation analysis or bibliometrics. Among the measures that have emerged from citation analysis are the citation counts for an individual article, an author, and an academic journal.
The h-index is an author-level metric that attempts to measure both the productivity and citation impact of the publications of a scientist or scholar. The h-index correlates with obvious success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a scholarly journal as well as a group of scientists, such as a department or university or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.
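A small sketch of the definition (the citation counts in the example are arbitrary illustrative values):

    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: three of these five papers have at least 3 citations.
    print(h_index([6, 5, 3, 1, 1]))  # 3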
The Journal of Biological Chemistry is a weekly peer-reviewed scientific journal that was established in 1905. Since 1925, it has been published by the American Society for Biochemistry and Molecular Biology. It covers research in areas of biochemistry and molecular biology. The editor-in-chief is Lila Gierasch. All its articles are available free after one year of publication; in-press articles are available free on its website immediately after acceptance.
The Science Citation Index (SCI) is a citation index originally produced by the Institute for Scientific Information (ISI) and created by Eugene Garfield. It was officially launched in 1964. It is now owned by Clarivate Analytics. The larger version covers more than 8,500 notable and significant journals, across 150 disciplines, from 1900 to the present. These are alternatively described as the world's leading journals of science and technology, because of a rigorous selection process.
Journal Citation Reports (JCR) is an annual publication by Clarivate Analytics. It has been integrated with the Web of Science and is accessed from the Web of Science-Core Collections. It provides information about academic journals in the natural sciences and social sciences, including impact factors. The JCR was originally published as a part of Science Citation Index. Currently, the JCR, as a distinct service, is based on citations compiled from the Science Citation Index Expanded and the Social Sciences Citation Index.
Journal ranking is widely used in academic circles in the evaluation of an academic journal's impact and quality. Journal rankings are intended to reflect the place of a journal within its field, the relative difficulty of being published in that journal, and the prestige associated with it. They have been introduced as official research evaluation tools in several countries.
Web of Science is a website which provides subscription-based access to multiple databases that provide comprehensive citation data for many different academic disciplines. It was originally produced by the Institute for Scientific Information (ISI) and is currently maintained by Clarivate Analytics.
The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, is a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the eigenfactor than those from poorly ranked journals. As a measure of importance, the Eigenfactor score scales with the total impact of a journal. All else equal, journals generating higher impact to the field have larger Eigenfactor scores.
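The weighting idea can be sketched with a simplified PageRank-style power iteration over a toy journal-to-journal citation matrix. This is only an illustration of the eigenvector-centrality principle, not the actual Eigenfactor algorithm, which among other things excludes journal self-citations, normalizes by article counts, and uses a five-year citation window:

    import numpy as np

    # Toy citation matrix, invented for illustration: C[i, j] is the number
    # of citations from journal j to journal i.
    C = np.array([[0, 3, 5],
                  [2, 0, 1],
                  [4, 2, 0]], dtype=float)

    M = C / C.sum(axis=0)          # column-normalize outgoing citations
    alpha, n = 0.85, C.shape[0]    # damping factor, number of journals
    v = np.full(n, 1.0 / n)        # start from a uniform vector
    for _ in range(100):           # damped power iteration
        v = alpha * (M @ v) + (1 - alpha) / n
    scores = v / v.sum()           # higher score: cited more by well-cited journals
    print(scores)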
In scholarly and scientific publishing, altmetrics are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article-level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc. Altmetrics use public APIs across platforms to gather data with open scripts and algorithms. Altmetrics did not originally cover citation counts, but instead calculate scholarly impact based on diverse online research outputs, such as social media, online news media, online reference managers and so on. They demonstrate both the impact and the detailed composition of the impact. Altmetrics could be applied to research filtering, promotion and tenure dossiers, grant applications, and the ranking of newly published articles in academic search engines.
Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. A prime example is the h-index, which was invented and suggested by Jorge E. Hirsch as a "useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement."
CiteScore (CS) of an academic journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. This journal evaluation metric was launched in December 2016 by Elsevier as an alternative to the generally used JCR impact factors (IFs). While CiteScore and JCR impact factor are similar in their definition, CiteScore is based on the citations recorded in the Scopus database rather than in JCR, and those citations are collected for articles published in the preceding three years instead of two or five.
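Following the definition above (the original 2016 formulation; the notation is illustrative), the CiteScore of a title for year y can be written as:

    \mathrm{CS}_y = \frac{\mathrm{Citations}_y}{\sum_{k=1}^{3} \mathrm{Documents}_{y-k}}

where the numerator counts citations recorded in Scopus in year y to documents published in the three preceding years.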
The Immediacy Index is a measure of the speed at which content in a particular journal is picked up and referred to: the average number of times an article is cited in the year it is published. The journal Immediacy Index indicates how quickly articles in a journal are cited; the aggregate Immediacy Index indicates how quickly articles in a subject category are cited.
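In formula form (illustrative notation), the Immediacy Index of a journal for year y is:

    \mathrm{Immediacy\ Index}_y = \frac{\text{citations received in year } y \text{ by articles published in year } y}{\text{number of articles published in year } y}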