Impact factor

The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index that reflects the yearly average number of citations that articles published in the last two years in a given journal received. It is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones.

History

The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI). Impact factors are calculated yearly starting from 1975 for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, [1] and became known as Thomson ISI. In 2018, Thomson ISI was sold to Onex Corporation and Baring Private Equity Asia. [2] They founded a new corporation, Clarivate, which is now the publisher of the JCR. [3]

Calculation

In any given year, the impact factor of a journal is the number of citations, received in that year, of articles published in that journal during the two preceding years, divided by the total number of "citable items" published in that journal during the two preceding years: [4]
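The calculation just described can be written compactly as a formula (here y denotes the JCR year):

```latex
% Two-year impact factor for JCR year y, as described in the text
\mathrm{IF}_y \;=\;
  \frac{\text{Citations}_y\,(\text{to items published in } y-1 \text{ and } y-2)}
       {\text{Publications}_{y-1} + \text{Publications}_{y-2}}
```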

For example, Nature had an impact factor of 41.577 in 2017. [5]

This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. Note that 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.

The value of the impact factor depends on how "citations" and "publications" are defined; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows. "Publications" are items that are classed as "article", "review" or "proceedings paper" [6] in the Web of Science (WoS) database; other items like editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database, but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, which is defined and calculated by ISI and cannot be verified by external users. [7]

New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to Volume 1, and the number of articles published in the year prior to Volume 1, are known to be zero. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with fewer than two years of indexing, based on partial citation data. [8] [9] The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years. [10] [11]
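As an arithmetic sketch, the two-year and five-year figures differ only in how many prior years of citations and items are counted; all numbers below are invented for illustration:

```python
def impact_factor(citations_to_prior_years, items_in_prior_years):
    """Citations received in the JCR year to items from the prior years,
    divided by the citable items published in those years. The window
    (two-year or five-year) is set by how many years are passed in."""
    return sum(citations_to_prior_years) / sum(items_in_prior_years)

# Invented counts: citations received in 2017 to items from 2015 and 2016,
# and citable items published in each of those years.
two_year = impact_factor([900, 1100], [240, 260])  # 2000 / 500
print(two_year)                                    # 4.0

# Five-year variant: same idea over 2012-2016 (again, invented numbers).
five_year = impact_factor([300, 400, 500, 900, 1100],
                          [250, 250, 240, 240, 260])
print(round(five_year, 2))                         # 2.58
```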

Use

The impact factor is used to compare different journals within a certain field. The Web of Science indexes more than 11,500 science and social science journals. [12]

Journal impact factors are often used to evaluate the merit of individual articles and individual researchers. [13] This use of impact factors was summarised by Hoeffel: [14]

Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty. ... In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.

As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Garfield agrees with Hoeffel, [15] but warns about the "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal". [16]

Criticisms

Numerous criticisms have been made regarding the use of impact factors. [17] [18] [19] A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that is not normally distributed, and suggested that it would be more appropriate to present the median of these data. [20] There is also a more general debate on the validity of the impact factor as a measure of journal importance and the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors and other stakeholders. [21] [22] Others have made more general criticisms, arguing that the emphasis on impact factor results from the negative influence of neoliberal policies on academia, claiming that what is needed is not just replacement of the impact factor with more sophisticated metrics for science publications, but also discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education. [23] [24]

Validity as a measure of importance

It has been stated that impact factors and citation analysis in general are affected by field-dependent factors [25] which may invalidate comparisons not only across disciplines but even between different fields of research within one discipline. [26] The percentage of total citations occurring in the first two years after publication also varies widely among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. [27] Thus impact factors cannot be used to compare journals across disciplines.

Because citation counts have highly skewed distributions, [28] the mean number of citations is potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself. [29] For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles. [30] Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally. [31]
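The skew argument can be illustrated with a hypothetical set of per-article citation counts, where one heavily cited paper pulls the mean far above what a typical article receives:

```python
import statistics

# Hypothetical two-year citation counts for one journal's articles:
# most are cited a few times, one is cited very heavily.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

mean = statistics.mean(citations)      # what an impact-factor-style average reports
median = statistics.median(citations)  # what the typical article actually receives

print(mean, median)   # 13.8 2.0
```

The mean here says nothing useful about a randomly chosen article, which is the core of the criticism above.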

Indeed, impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. [32] The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. [33] The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, more than Nature (at 31.434) and Science (at 28.103). [34] The second-most cited article in Acta Crystallographica Section A in 2008 only had 28 citations. [35] Additionally, impact factor is a journal metric and should not be used to assess individual researchers or institutions. [36] [37]

Journal rankings constructed based solely on impact factors only moderately correlate with those compiled from the results of expert surveys. [38]

A.E. Cawkell, former Director of Research at the Institute for Scientific Information remarked that the Science Citation Index (SCI), on which the impact factor is based, "would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints." [39]

Editorial policies that affect the impact factor

A journal can adopt editorial policies to increase its impact factor. [40] [41] For example, journals may publish a larger percentage of review articles, which are generally cited more often than research reports. [4] Review articles can therefore raise the impact factor of a journal, and review journals often have the highest impact factors in their respective fields. [22] Some journal editors set their submission policy to "by invitation only", inviting exclusively senior scientists to publish "citable" papers that increase the journal's impact factor. [22]

Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography in hopes that Journal Citation Reports will not deem it a "citable item"). As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. [42] Items considered uncitable—and thus not incorporated in impact factor calculations—can, if cited, still enter into the numerator of the equation, despite the ease with which such citations could be excluded. This effect is hard to evaluate, because the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may fall into either class.
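The numerator/denominator asymmetry described above can be sketched numerically; all counts here are invented:

```python
# Invented counts over a journal's two-year window.
citable_items = 200              # articles/reviews: counted in the denominator
front_matter_items = 50          # editorials, letters: excluded from the denominator
citations_to_citable = 600
citations_to_front_matter = 100  # these citations still reach the numerator

# Citations to "uncitable" items inflate the numerator while the items
# themselves never enlarge the denominator:
reported_if = (citations_to_citable + citations_to_front_matter) / citable_items
print(reported_if)               # 3.5

# If the same items had been classed as citable, the figure would fall:
all_in_if = (citations_to_citable + citations_to_front_matter) / (
    citable_items + front_matter_items)
print(all_in_if)                 # 2.8
```

This is why negotiating the classification of borderline items can move a journal's impact factor substantially.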

Another, less insidious, tactic journals employ is to publish a large portion of their papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal, which will increase the journal's impact factor. [43] [44]

Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. [45] The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports. [46]

Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. [47] However, cases of coercive citation have occasionally been reported for other disciplines. [48]

Correlation between impact factor and quality

The journal impact factor (JIF) was originally designed by Eugene Garfield as a metric to help librarians decide which journals were worth subscribing to, as the JIF aggregates the number of citations to articles published in each journal. Since then, the JIF has come to be regarded as a mark of journal "quality" and has gained widespread use in the evaluation of research and researchers, even at the institutional level. It thus has a significant impact on steering research practices and behaviours. [49] [50]

However, critics of the JIF state that use of the arithmetic mean in its calculation is problematic because the pattern of citation distribution is skewed. Citation distributions for eight selected journals, [51] along with their JIFs and the percentage of citable items below the JIF, show that the distributions are clearly skewed, making the arithmetic mean an inappropriate statistic for saying anything about individual papers within them. More informative and readily available article-level metrics can be used instead, such as citation counts or "altmetrics", along with other qualitative and quantitative measures of research "impact". [52] [53]

As early as around 2010, national and international research funding institutions pointed out that numerical indicators such as the JIF should not be used as a measure of quality. [note 1] In fact, the JIF is a highly manipulated metric, [54] [55] [56] and the justification for its continued widespread use beyond its original narrow purpose seems to lie in its simplicity (an easily calculated and comparable number) rather than in any actual relationship to research quality. [57] [58] [59]

Empirical evidence shows that the misuse of the JIF – and journal ranking metrics in general – has a number of negative consequences for the scholarly communication system. These include confusion between the outreach of a journal and the quality of individual papers, and insufficient coverage of the social sciences and humanities as well as of research outputs from across Latin America, Africa, and South-East Asia. [60] Additional drawbacks include the marginalization of research in vernacular languages and on locally relevant topics, inducement to unethical authorship and citation practices, and, more generally, the fostering of a reputation economy in academia based on publishers' prestige rather than on actual research qualities such as rigorous methods, replicability and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has been shown to have deleterious effects on research quality. [61]

JIFs are still regularly used to evaluate research in many countries, which is problematic because a number of outstanding issues remain around the opacity of the metric and the fact that it is often negotiated by publishers. [62] [63] [64] However, these integrity problems appear to have done little to curb its widespread misuse.

A number of regional focal points and initiatives are now providing and suggesting alternative research assessment systems, including key documents such as the Leiden Manifesto [note 2] and the San Francisco Declaration on Research Assessment (DORA). Recent developments around 'Plan S' call for broader adoption and implementation of such initiatives, alongside fundamental changes in the scholarly communication system. [note 3] Thus, there is little basis for the popular simplification that equates JIFs with any measure of quality, and the ongoing inappropriate association of the two will continue to have deleterious effects. As appropriate measures of quality for authors and research, concepts of research excellence should be remodelled around transparent workflows and accessible research results. [65] [66] [52]

Responses

Because "the impact factor is not always a reliable instrument", in November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes". [18]

In July 2008, the International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions—e.g., considering only a limited number of publications per year for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20). [67]

In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines stipulating that only articles, and no bibliometric information on candidates, be evaluated in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor". [68] This decision follows similar ones by the National Science Foundation (US) and the Research Assessment Exercise (UK).[ citation needed ]

In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology together with a group of editors and publishers of scholarly journals created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions, [24] including in March 2015 the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe), [69] who have endorsed the document on the DORA website.

Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science have proposed citation distribution metrics as an alternative to impact factors. [70] [71] [72]

Some related values, also calculated and published by the same organization, include:

As with the impact factor, there are some nuances to this: for example, ISI excludes certain article types (such as news items, correspondence, and errata) from the denominator. [74] [75] [76]

Other measures of impact

Additional journal-level metrics are available from other organizations. For example, CiteScore is a metric for serial titles in Scopus, launched in December 2016 by Elsevier. [77] [78] While these metrics apply only to journals, there are also author-level metrics, such as the h-index, that apply to individual researchers. In addition, article-level metrics measure impact at the article level instead of the journal level. Other, more general alternative metrics, or "altmetrics", may include article views, downloads, or mentions in social media.

Counterfeit

Fake impact factors are produced by some companies not affiliated with Journal Citation Reports. [79] According to an article published in the United States National Library of Medicine, these include Global Impact Factor (GIF), Citefactor, and Universal Impact Factor (UIF). [80] Jeffrey Beall maintained a list of such misleading metrics. [81] [82]

False impact factors are often used by predatory publishers. [83] Consulting Journal Citation Reports' master journal list can confirm if a publication is indexed by Journal Citation Reports. [84] The use of fake impact metrics is considered a "red flag". [85]

Notes

  1. ""Quality not Quantity" – DFG Adopts Rules to Counter the Flood of Publications in Research". DFG Press Release No. 7 (2010)
  2. "The Leiden Manifesto for Research Metrics". 2015.
  3. "Plan S implementation guidelines". February 2019.

References

  1. "Thomson Corporation acquired ISI". Online . July 1992. Archived from the original on 2013-05-15. Retrieved 2012-02-26.
  2. "Acquisition of the Thomson Reuters Intellectual Property and Science Business by Onex and Baring Asia Completed".
  3. "Journal Citation Reports". Web of Science Group. Retrieved 2019-09-14.
  4. Garfield, Eugene (20 June 1994). "The Thomson Reuters Impact Factor". Thomson Reuters.
  5. "Nature". 2017 Journal Citation Reports. Web of Science (Science ed.). Thomson Reuters. 2018.
  6. McVeigh, M. E.; Mann, S. J. (2009). "The Journal Impact Factor Denominator". JAMA. 302 (10): 1107–9. doi: 10.1001/jama.2009.1301 . PMID   19738096.
  7. Hubbard, S. C.; McVeigh, M. E. (2011). "Casting a wide net: The Journal Impact Factor numerator". Learned Publishing. 24 (2): 133–137. doi:10.1087/20110208.
  8. "RSC Advances receives its first partial impact factor". RSC Advances Blog. 24 June 2013. Retrieved 16 July 2018.
  9. "Our first (partial) impact factor and our continuing (full) story". news.cell.com. 30 July 2014. Archived from the original on 7 March 2016. Retrieved 21 May 2015.
  10. "JCR with Eigenfactor". Archived from the original on 2010-01-02. Retrieved 2009-08-26.
  11. "ISI 5-Year Impact Factor". APA. Retrieved 2017-11-12.
  12. "Every journal has a story to tell". Journal Citation Reports. Clarivate Analytics. Retrieved 2019-03-15.
  13. McKiernan, Erin C; Schimanski, Lesley A; Muñoz Nieves, Carol; Matthias, Lisa; Niles, Meredith T; Alperin, Juan P (31 July 2019). "Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations". eLife. 8. doi:10.7554/eLife.47338. PMC   6668985 . PMID   31364991.
  14. Hoeffel, C. (1998). "Journal impact factors". Allergy. 53 (12): 1225. doi:10.1111/j.1398-9995.1998.tb03848.x. PMID   9930604.
  15. Garfield, Eugene (2006). "The History and Meaning of the Journal Impact Factor" (PDF). JAMA. 295 (1): 90–3. Bibcode:2006JAMA..295...90G. doi:10.1001/jama.295.1.90. PMID   16391221.
  16. Garfield, Eugene (June 1998). "The Impact Factor and Using It Correctly". Der Unfallchirurg. 101 (6): 413–414. PMID   9677838.
  17. "Time to remodel the journal impact factor". Nature . 535 (466): 466. 2016. Bibcode:2016Natur.535..466.. doi: 10.1038/535466a . PMID   27466089.
  18. "European Association of Science Editors (EASE) Statement on Inappropriate Use of Impact Factors". Retrieved 2012-07-23.
  19. Callaway, Ewen (2016). "Beat it, impact factor! Publishing elite turns against controversial metric". Nature. 535 (7611): 210–211. Bibcode:2016Natur.535..210C. doi: 10.1038/nature.2016.20224 . PMID   27411614.
  20. Rossner, M.; Van Epps, H.; Hill, E. (17 December 2007). "Show me the data". Journal of Cell Biology. 179 (6): 1091–2. doi:10.1083/jcb.200711140. PMC   2140038 . PMID   18086910.
  21. Wesel, M. van (2016). "Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications". Science and Engineering Ethics. 22 (1): 199–225. doi:10.1007/s11948-015-9638-0. PMC   4750571 . PMID   25742806.
  22. Moustafa, Khaled (2015). "The disaster of the impact factor". Science and Engineering Ethics. 21 (1): 139–142. doi:10.1007/s11948-014-9517-0. PMID   24469472.
  23. Kansa, Eric (27 January 2014). "It's the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough". LSE Impact Blog. Retrieved 16 July 2018.
  24. Cabello, F.; Rascón, M.T. (2015). "The Index and the Moon. Mortgaging Scientific Evaluation". International Journal of Communication. 9: 1880–1887.
  25. Bornmann, L.; Daniel, H. D. (2008). "What do citation counts measure? A review of studies on citing behavior". Journal of Documentation. 64 (1): 45–80. doi:10.1108/00220410810844150.
  26. Anauati, Maria Victoria; Galiani, Sebastian; Gálvez, Ramiro H. (November 11, 2014). "Quantifying the Life Cycle of Scholarly Articles Across Fields of Economic Research". SSRN   2523078.
  27. van Nierop, Erjen (2009). "Why Do Statistics Journals Have Low Impact Factors?". Statistica Neerlandica. 63 (1): 52–62. doi:10.1111/j.1467-9574.2008.00408.x.
  28. Callaway, Ewen (14 July 2016). "Beat it, impact factor! Publishing elite turns against controversial metric". Nature. 535 (7611): 210–211. Bibcode:2016Natur.535..210C. doi: 10.1038/nature.2016.20224 . PMID   27411614.
  29. Joint Committee on Quantitative Assessment of Research (12 June 2008). "Citation Statistics" (PDF). International Mathematical Union.
  30. "Not-so-deep impact". Nature. 435 (7045): 1003–1004. 23 June 2005. doi: 10.1038/4351003b . PMID   15973362.
  31. Lozano, George A.; Larivière, Vincent; Gingras, Yves (2012). "The weakening relationship between the impact factor and papers' citations in the digital age". Journal of the American Society for Information Science and Technology. 63 (11): 2140–2145. arXiv: 1205.4328 . Bibcode:2012arXiv1205.4328L. doi:10.1002/asi.22731.
  32. Bohannon, John (2016). "Hate journal impact factors? New study gives you one more reason". Science . doi:10.1126/science.aag0643.
  33. "House of Commons – Science and Technology – Tenth Report". 2004-07-07. Retrieved 2008-07-28.
  34. Grant, Bob (21 June 2010). "New impact factors yield surprises". The Scientist. Retrieved 31 March 2011.
  35. mmcveigh (17 June 2010). "What does it mean to be #2 in Impact?" . Retrieved 2018-07-16.
  36. Seglen, P. O. (1997). "Why the impact factor of journals should not be used for evaluating research". BMJ . 314 (7079): 498–502. doi:10.1136/bmj.314.7079.497. PMC   2126010 . PMID   9056804.
  37. "EASE Statement on Inappropriate Use of Impact Factors". European Association of Science Editors. November 2007. Retrieved 2013-04-13.
  38. Serenko, A.; Dohan, M. (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics . 5 (4): 629–648. doi:10.1016/j.joi.2011.06.002.
  39. Cawkell, Anthony E. (1977). "Science perceived through the Science Citation Index". Endeavour. 1 (2): 57–62. doi:10.1016/0160-9327(77)90107-7. PMID   71986.
  40. Monastersky, Richard (14 October 2005). "The Number That's Devouring Science". The Chronicle of Higher Education.
  41. Arnold, Douglas N.; Fowler, Kristine K. (2011). "Nefarious Numbers". Notices of the American Mathematical Society . 58 (3): 434–437. arXiv: 1010.0278 . Bibcode:2010arXiv1010.0278A.
  42. PLoS Medicine Editors (6 June 2006). "The Impact Factor Game". PLOS Medicine. 3 (6): e291. doi:10.1371/journal.pmed.0030291. PMC   1475651 . PMID   16749869.
  43. Agrawal, A. (2005). "Corruption of Journal Impact Factors" (PDF). Trends in Ecology and Evolution . 20 (4): 157. doi:10.1016/j.tree.2005.02.002. PMID   16701362. Archived from the original (PDF) on 2010-06-19.
  44. Fassoulaki, A.; Papilas, K.; Paraskeva, A.; Patris, K. (2002). "Impact factor bias and proposed adjustments for its determination". Acta Anaesthesiologica Scandinavica . 46 (7): 902–5. doi:10.1034/j.1399-6576.2002.460723.x. PMID   12139549.
  45. Schuttea, H. K.; Svec, J. G. (2007). "Reaction of Folia Phoniatrica et Logopaedica on the Current Trend of Impact Factor Measures". Folia Phoniatrica et Logopaedica . 59 (6): 281–285. doi:10.1159/000108334. PMID   17965570.
  46. "Journal Citation Reports – Notices". Archived from the original on 2010-05-15. Retrieved 2009-09-24.
  47. Wilhite, A. W.; Fong, E. A. (2012). "Coercive Citation in Academic Publishing". Science. 335 (6068): 542–3. Bibcode:2012Sci...335..542W. doi:10.1126/science.1212540. PMID   22301307.
  48. Smith, Richard (1997). "Journal accused of manipulating impact factor". BMJ. 314 (7079): 463. doi:10.1136/bmj.314.7079.461d. PMC   2125988 . PMID   9056791.
  49. Gargouri, Yassine; Hajjem, Chawki; Lariviere, Vincent; Gingras, Yves; Carr, Les; Brody, Tim; Harnad, Stevan (2018). "The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects". arXiv: 1801.08992 . Bibcode:2018arXiv180108992L.
  50. Curry, Stephen (2018). "Let's Move beyond the Rhetoric: It's Time to Change How We Judge Research". Nature. 554 (7691): 147. Bibcode:2018Natur.554..147C. doi: 10.1038/d41586-018-01642-w . PMID   29420505.
  51. Larivière, Vincent; Kiermer, Véronique; MacCallum, Catriona J.; McNutt, Marcia; Patterson, Mark; Pulverer, Bernd; Swaminathan, Sowmya; Taylor, Stuart; Curry, Stephen (2016). "A Simple Proposal for the Publication of Journal Citation Distributions". doi: 10.1101/062109 .
  52. Hicks, Diana; Wouters, Paul; Waltman, Ludo; De Rijcke, Sarah; Rafols, Ismael (2015). "Bibliometrics: The Leiden Manifesto for Research Metrics". Nature. 520 (7548): 429–431. Bibcode:2015Natur.520..429H. doi: 10.1038/520429a . PMID   25903611.
  53. "Altmetrics: A Manifesto".
  54. Falagas, Matthew E.; Alexiou, Vangelis G. (2008). "The Top-Ten in Journal Impact Factor Manipulation". Archivum Immunologiae et Therapiae Experimentalis. 56 (4): 223–226. doi:10.1007/s00005-008-0024-5. PMID   18661263.
  55. Tort, Adriano B. L.; Targino, Zé H.; Amaral, Olavo B. (2012). "Rising Publication Delays Inflate Journal Impact Factors". PLOS One. 7 (12): e53374. Bibcode:2012PLoSO...753374T. doi:10.1371/journal.pone.0053374. PMC   3534064 . PMID   23300920.
  56. Fong, Eric A.; Wilhite, Allen W. (2017). "Authorship and Citation Manipulation in Academic Research". PLOS One. 12 (12): e0187394. Bibcode:2017PLoSO..1287394F. doi:10.1371/journal.pone.0187394. PMC   5718422 . PMID   29211744.
  57. "Citation Statistics". A Report from the Joint Committee on Quantitative Assessment of Research.
  58. Brembs, B. (2018). "Prestigious Science Journals Struggle to Reach Even Average Reliability". Frontiers in Human Neuroscience. 12: 37. doi:10.3389/fnhum.2018.00037. PMC   5826185 . PMID   29515380.
  59. Gargouri, Yassine; Hajjem, Chawki; Lariviere, Vincent; Gingras, Yves; Carr, Les; Brody, Tim; Harnad, Stevan (2009). "The Impact Factor's Matthew Effect: A Natural Experiment in Bibliometrics". arXiv: 0908.3177 . Bibcode:2009arXiv0908.3177L.
  60. Brembs, Björn; Button, Katherine; Munafò, Marcus (2013). "Deep impact: Unintended consequences of journal rank". Frontiers in Human Neuroscience. 7: 291. arXiv: 1301.3748 . Bibcode:2013arXiv1301.3748B. doi:10.3389/fnhum.2013.00291. PMC   3690355 . PMID   23805088.
  61. Vessuri, Hebe; Guédon, Jean-Claude; Cetto, Ana María (2014). "Excellence or Quality? Impact of the Current Competition Regime on Science and Scientific Publishing in Latin America and Its Implications for Development" (PDF). Current Sociology. 62 (5): 647–665. doi:10.1177/0011392113512839.
  62. "Open Access and the Divide between 'Mainstream' and 'Peripheral'". Como Gerir e Qualificar Revistas Científicas: 1–25.
  63. McKiernan, Erin C.; Niles, Meredith T.; Fischman, Gustavo E.; Schimanski, Lesley; Nieves, Carol Muñoz; Alperin, Juan Pablo (2019). "How Significant Are the Public Dimensions of Faculty Work in Review, Promotion, and Tenure Documents?". eLife. 8. doi:10.7554/eLife.42254. PMC   6391063 . PMID   30747708.
  64. Rossner, Mike; Van Epps, Heather; Hill, Emma (2007). "Show Me the Data". The Journal of Cell Biology. 179 (6): 1091–1092. doi:10.1083/jcb.200711140. PMC   2140038 . PMID   18086910.
  65. Moore, Samuel; Neylon, Cameron; Eve, Martin Paul; O'Donnell, Daniel Paul; Pattinson, Damian (2017). "'Excellence R Us': University Research and the Fetishisation of Excellence". Palgrave Communications. 3. doi: 10.1057/palcomms.2016.105 .
  66. Owen, R.; MacNaghten, P.; Stilgoe, J. (2012). "Responsible Research and Innovation: From Science in Society to Science for Society, with Society". Science and Public Policy. 39 (6): 751–760. doi:10.1093/scipol/scs093.
  67. "International Council for Science statement". Icsu.org. 2014-05-02. Retrieved 2014-05-18.
  68. "Quality not Quantity: DFG Adopts Rules to Counter the Flood of Publications in Research". Deutsche Forschungsgemeinschaft. 23 February 2010. Retrieved 2018-07-16.
  69. "Not everything that can be counted counts …". League of European Research Universities. 16 March 2015. Archived from the original on 2017-12-01.
  70. Kiermer, Veronique (2016). "Measuring Up: Impact Factors Do Not Reflect Article Citation Rates". PLOS.
  71. "Ditching Impact Factors for Deeper Data". The Scientist. Retrieved 2016-07-29.
  72. Corneliussen, Steven T. (2016). "Bad summer for the journal impact factor". Physics Today . doi:10.1063/PT.5.8183.
  73. "Impact Factor, Immediacy Index, Cited Half-life". Swedish University of Agricultural Sciences. Archived from the original on 23 May 2008. Retrieved 30 October 2016.
  74. "Bibliometrics (journal measures)". Elsevier. Archived from the original on 2012-08-18. Retrieved 2012-07-09. a measure of the speed at which content in a particular journal is picked up and referred to
  75. "Glossary of Thomson Scientific Terminology". Thomson Reuters . Retrieved 2012-07-09.
  76. "Journal Citation Reports Contents – Immediacy Index". Clarivate Analytics. Retrieved 2012-07-09. The Immediacy Index is the average number of times an article is cited in the year it is published. The journal Immediacy Index indicates how quickly articles in a journal are cited. The aggregate Immediacy Index indicates how quickly articles in a subject category are cited.
  77. Elsevier. "Metrics – Features – Scopus – Solutions | Elsevier". www.elsevier.com. Retrieved 2016-12-09.
  78. Van Noorden, Richard (2016). "Controversial impact factor gets a heavyweight rival". Nature. 540 (7633): 325–326. Bibcode:2016Natur.540..325V. doi: 10.1038/nature.2016.21131 . PMID   27974784.
  79. Jalalian M (2015). "The story of fake impact factor companies and how we detected them". Electronic Physician. 7 (2): 1069–72. doi:10.14661/2015.1069-1072. PMC   4477767 . PMID   26120416.
  80. Jalalian, M (2015). "The story of fake impact factor companies and how we detected them". Electronic Physician. 7 (2): 1069–72. doi:10.14661/2015.1069-1072. PMC   4477767 . PMID   26120416.
  81. "Misleading Metrics". Archived 2017-01-11 at the Wayback Machine.
  82. "Misleading Metrics – Beall's List".
  83. Beall, Jeffrey. "Scholarly Open-Access – Fake impact factors". Archived from the original on 2016-03-21.
  84. "Thomson Reuters Intellectual Property & Science Master Journal List".
  85. Ebrahimzadeh, Mohammad H. (April 2016). "Validated Measures of Publication Quality: Guide for Novice Researchers to Choose an Appropriate Journal for Paper Submission". Archives of Bone and Joint Surgery. 4 (2): 94–96. PMC   4852052 . PMID   27200383.

Further reading