The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations of articles published in the last two years in a given journal, as indexed by Clarivate's Web of Science.
As a journal-level metric, it is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factor values are given the status of being more important, or carry more prestige in their respective fields, than those with lower values.
While frequently used by universities and funding bodies to decide on promotion and research proposals, it has been criticised for distorting good scientific practices. [1] [2] [3]
The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI) in Philadelphia. Impact factors began to be calculated yearly starting from 1975 for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, [4] and became known as Thomson ISI. In 2018, Thomson Reuters spun off and sold ISI to Onex Corporation and Baring Private Equity Asia. [5] They founded a new corporation, Clarivate, which is now the publisher of the JCR. [6]
In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years and the total number of "citable items" published in that journal during the two preceding years: [7] [8]
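In symbols (the notation here is illustrative rather than Clarivate's own), writing \(\mathrm{Citations}_y\) for the citations received in year \(y\) to items the journal published in the two preceding years, and \(\mathrm{Items}_y\) for the number of citable items it published in year \(y\):

\[
\mathrm{IF}_y = \frac{\mathrm{Citations}_y}{\mathrm{Items}_{y-1} + \mathrm{Items}_{y-2}}
\]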
For example, Nature had an impact factor of 41.577 in 2017: [9]
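In the notation above (the underlying citation and item counts are Clarivate's and are not reproduced here), the calculation takes the form:

\[
\mathrm{IF}_{2017} = \frac{\mathrm{Citations}_{2017}\ \text{to items published in 2015–2016}}{\mathrm{Items}_{2015} + \mathrm{Items}_{2016}} = 41.577
\]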
This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.
The value of the impact factor depends on how "citations" and "publications" are defined; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows. "Publications" are items that are classed as "article", "review" or "proceedings paper" [10] in the Web of Science (WoS) database; other items like editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database, but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, which is defined and calculated by ISI and cannot be verified by external users. [11]
New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to volume 1, and the number of articles published in the year prior to volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with less than two years of indexing, based on partial citation data. [12] [13] The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years. [14] [15]
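By analogy with the two-year formula above (again in illustrative notation), the five-year variant simply widens the publication window:

\[
\mathrm{IF}^{(5)}_y = \frac{\mathrm{Citations}_y}{\sum_{k=1}^{5}\mathrm{Items}_{y-k}}
\]

where \(\mathrm{Citations}_y\) now counts citations received in year \(y\) to items published in the five preceding years.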
While originally invented as a tool to help university librarians to decide which journals to purchase, the impact factor soon became used as a measure for judging academic success. This use of impact factors was summarised by Hoeffel in 1998: [16]
Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty. ... In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.
As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Eugene Garfield, the inventor of the JIF agreed with Hoeffel, [17] but warned about the "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal". [18] Despite this warning, the use of the JIF has evolved, playing a key role in the process of assessing individual researchers, their job applications and their funding proposals. In 2005, The Journal of Cell Biology noted that:
Impact factor data ... have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses. [19]
More targeted research has begun to provide firm evidence of how deeply the impact factor is embedded within formal and informal research assessment processes. A 2019 review studied how often the JIF featured in documents related to the review, promotion, and tenure of scientists in US and Canadian universities. It concluded that 40% of universities focused on academic research specifically mentioned the JIF as part of such review, promotion, and tenure processes. [20] And a 2017 study of how researchers in the life sciences behave described their everyday decision-making practices as "highly governed by pressures to publish in high-impact journals". The deeply embedded nature of such indicators affects not only research assessment but also the more fundamental issue of what research is actually undertaken: "Given the current ways of evaluation and valuing research, risky, lengthy, and unorthodox projects rarely take center stage." [21]
Numerous critiques have been made regarding the use of impact factors, both in terms of their statistical validity and of their implications for how science is carried out and assessed. [3] [22] [23] [24] [25] A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that are not normally distributed, and suggested that it would be more appropriate to present the median of these data. [19] There is also a more general debate on the validity of the impact factor as a measure of journal importance and on the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors and other stakeholders. [26] While the emphasis on high-impact journals may lead to strategic publishing practices that prioritize journal prestige over the quality and relevance of research, it is important to acknowledge the "privilege paradox": [27] younger researchers, particularly those from under-represented regions, often lack the established reputation or networks to secure recognition outside of these metrics. [27] This can lead to a narrow focus on publishing in top-tier journals, potentially compromising the diversity of research topics and methodologies. Further criticisms argue that emphasis on the impact factor results from the negative influence of neoliberal politics on academia. Some of these arguments demand not just replacement of the impact factor with more sophisticated metrics, but also discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education. [28] [29]
It has been stated that impact factors in particular, and citation analysis in general, are affected by field-dependent factors [30] which invalidate comparisons not only across disciplines but even within different fields of research of one discipline. [31] The percentage of total citations occurring in the first two years after publication also varies widely among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. [32] Thus impact factors cannot be used to compare journals across disciplines.
Impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. [33] In 2004, the Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. [34] Other studies have repeatedly stated that impact factor is a metric for journals and should not be used to assess individual researchers or institutions. [35] [36] [37]
Because the impact factor is commonly accepted as a proxy for research quality, some journals adopt editorial policies and practices, some acceptable and some of dubious purpose, to increase their impact factor. [38] [39] For example, journals may publish a larger percentage of review articles, which are generally cited more than research reports. [8] Research undertaken in 2020 on dentistry journals concluded that the publication of "systematic reviews have significant effect on the Journal Impact Factor ... while papers publishing clinical trials bear no influence on this factor. Greater yearly average of published papers ... means a higher impact factor." [40]
Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography in the hope that Journal Citation Reports will not deem them "citable items"). As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. [41] Items considered uncitable—and thus not incorporated in impact factor calculations—can, if cited, still enter into the numerator of the equation, despite the ease with which such citations could be excluded. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may be part of either class.
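As a purely hypothetical illustration of how sensitive the ratio is to such denominator decisions: holding the citation count fixed,

\[
\frac{300\ \text{citations}}{100\ \text{citable items}} = 3.0 \qquad\text{versus}\qquad \frac{300\ \text{citations}}{50\ \text{citable items}} = 6.0,
\]

so halving the number of items counted as "citable" doubles the impact factor without a single additional citation being received.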
Another, less insidious, tactic journals employ is to publish a large portion of their papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal, which will increase the journal's impact factor. [42] [43]
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica , with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. [44] The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports. [45]
Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. [46] A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. [47] Editors of leading business journals banded together to disavow the practice. [48] However, cases of coercive citation have occasionally been reported for other disciplines. [49]
The journal impact factor was originally designed by Eugene Garfield as a metric to help librarians decide which journals were worth indexing, as the JIF aggregates the number of citations to articles published in each journal. Since then, the JIF has come to be treated as a mark of journal "quality" and has gained widespread use for the evaluation of research and researchers instead, even at the institutional level. It thus has a significant impact on steering research practices and behaviours. [50] [2] [51]
By 2010, national and international research funding institutions were already starting to point out that numerical indicators such as the JIF should not be considered as a measure of quality. [note 1] In fact, research was indicating that the JIF is a highly manipulated metric, [52] [53] [54] and the justification for its continued widespread use beyond its original narrow purpose seems due to its simplicity (easily calculable and comparable number), rather than any actual relationship to research quality. [55] [56] [57]
Empirical evidence shows that the misuse of the JIF—and journal ranking metrics in general—has a number of negative consequences for the scholarly communication system. These include gaps between the reach of a journal and the quality of its individual papers, [25] and insufficient coverage of the social sciences and humanities, as well as of research outputs from across Latin America, Africa, and South-East Asia. [58] Additional drawbacks include the marginalization of research in vernacular languages and on locally relevant topics, and the inducement of unethical authorship and citation practices. More generally, the impact factor fosters a reputation economy, where scientific success is based on publishing in prestigious journals ahead of actual research qualities such as rigorous methods, replicability and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has been shown to have deleterious effects on research quality. [59]
A number of regional and international initiatives are now providing and suggesting alternative research assessment systems, including key documents such as the Leiden Manifesto [note 2] and the San Francisco Declaration on Research Assessment (DORA). Plan S calls for a broader adoption and implementation of such initiatives alongside fundamental changes in the scholarly communication system. [note 3] As appropriate measures of quality for authors and research, concepts of research excellence should be remodelled around transparent workflows and accessible research results. [60] [61] [62]
JIFs are still regularly used to evaluate research in many countries, which is a problem since a number of issues remain around the opacity of the metric and the fact that it is often negotiated by publishers. [63] [64] [19]
Results of an impact factor can change dramatically depending on which items are considered as "citable" and therefore included in the denominator. [65] One notorious example of this occurred in 1988 when it was decided that meeting abstracts published in FASEB Journal would no longer be included in the denominator. The journal's impact factor jumped from 0.24 in 1988 to 18.3 in 1989. [66] Publishers routinely discuss with Clarivate how to improve the "accuracy" of their journals' impact factor and therefore get higher scores. [41] [25]
Such discussions routinely produce "negotiated values" which result in dramatic changes in the observed scores for dozens of journals, sometimes after unrelated events such as a journal's purchase by one of the larger publishers. [67]
Because citation counts have highly skewed distributions, [24] the mean number of citations is potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself. [69] For example, about 90% of Nature 's 2004 impact factor was based on only a quarter of its publications. Thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles. [70] Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally. [71]
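A minimal Python sketch (using made-up citation counts, not data for any real journal) illustrates why the mean can misrepresent a skewed citation distribution: a handful of highly cited papers dominates the average, while the median stays close to the typical article.

```python
import statistics

# Hypothetical citation counts for 20 articles in one journal over two years.
# Most articles receive few citations; a few outliers receive very many.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 8, 10, 40, 120, 300]

mean_citations = statistics.mean(citations)      # roughly what an impact-factor-style average reports
median_citations = statistics.median(citations)  # closer to the "typical" article

print(f"mean   = {mean_citations:.1f}")    # 25.8, inflated by the two outliers
print(f"median = {median_citations:.1f}")  # 3.0
```

Here most of the mean comes from the two most-cited papers, mirroring the Nature example above.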
The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, more than Nature (at 31.434) and Science (at 28.103). [72] The second-most cited article in Acta Crystallographica Section A in 2008 had only 28 citations. [73]
Critics of the JIF state that the use of the arithmetic mean in its calculation is problematic because the pattern of citation distribution is skewed, [74] and citation distribution metrics have been proposed as an alternative to impact factors. [75] [76] [77]
However, there have also been pleas to take a more nuanced approach to judging the distribution skewness of the impact factor. Ludo Waltman and Vincent Antonio Traag, in their 2021 paper, ran numerous simulations and concluded that "statistical objections against the use of the IF at the level of individual articles are not convincing", and that "the IF may be a more accurate indicator of the value of an article than the number of citations of the article". [1]
While the underlying mathematical model is publicly known, the dataset which is used to calculate the JIF is not publicly available. This prompted criticism: "Just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific's impact factor, which is based on hidden data". [19] However, a 2019 article demonstrated that "with access to the data and careful cleaning, the JIF can be reproduced", although this required much labour to achieve. [78] A 2020 research paper went further. It indicated that by querying open access or partly open-access databases, like Google Scholar, ResearchGate, and Scopus, it is possible to calculate approximate impact factors without the need to purchase Web of Science / JCR. [79]
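Assuming one has already looked up, in an open database of one's choice, the citations received in the target year by each article a journal published in the two preceding years, the arithmetic itself is trivial. The sketch below (function and variable names are illustrative, and no particular database API is implied) shows the shape of such an approximation.

```python
def approximate_impact_factor(citations_per_article: list[int]) -> float:
    """Approximate two-year impact factor.

    `citations_per_article` holds, for every article the journal published in the
    two preceding years, the number of citations it received in the target year
    (counts gathered manually or from an open database of the user's choice).
    """
    if not citations_per_article:
        raise ValueError("no articles supplied")
    return sum(citations_per_article) / len(citations_per_article)


# Toy data for a small hypothetical journal: 8 articles from the last two years.
counts = [0, 1, 1, 2, 3, 3, 5, 9]
print(f"approximate IF: {approximate_impact_factor(counts):.2f}")  # 3.00
```

A figure obtained this way will generally differ from the official JCR value, since citation coverage and the definition of "citable items" vary between databases.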
Just as the impact factor has attracted criticism for various immediate problems associated with its application, so there has also been criticism that its application undermines the broader process of science. Research has indicated that bibliometric figures, particularly the impact factor, decrease the quality of peer review an article receives, [80] cause a reluctance to share data, [21] decrease the quality of articles, [81] and reduce the scope of publishable research: "For many researchers the only research questions and projects that appear viable are those that can meet the demand of scoring well in terms of metric performance indicators – and chiefly the journal impact factor." [21] Furthermore, the process of publication and science is slowed down, as authors automatically try to publish in the journals with the highest impact factor and "editors and reviewers are tasked with reviewing papers that are not submitted to the most appropriate venues". [78]
Given the growing criticism and its widespread usage as a means of research assessment, organisations and institutions have begun to take steps to move away from the journal impact factor. In November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes". [23]
In July 2008, the International Council for Science Committee on Freedom and Responsibility in the Conduct of Science issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting possible remedies such as limiting the number of publications per year taken into consideration for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20). [82]
In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines to reduce the number of publications that could be submitted when applying for funding: "The focus has not been on what research someone has done but rather how many papers have been published and where." It noted that in decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, increasing importance has been given to numerical indicators such as the h-index and the impact factor". [83] The UK's Research Excellence Framework for 2014 also banned the journal impact factor, [84] although evidence suggested that this ban was often ignored. [85]
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology, together with a group of editors and publishers of scholarly journals, created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions, [29] including, in March 2015, the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe), [86] which endorsed the document on the DORA website.
Publishers, even those publishing high-impact journals, have also recognised the flaws. [87] Nature criticised the over-reliance on the JIF, pointing not just to its statistical flaws but to negative effects on science: "The resulting pressures and disappointments are nothing but demoralizing, and in badly run labs can encourage sloppy research that, for example, fails to test assumptions thoroughly or to take all the data into account before submitting big claims." [22] Various publishers now use a mixture of metrics on their websites; the PLOS series of journals does not display the impact factor. [88] Microsoft Academic took a similar view, stating that the h-index, EI/SCI and journal impact factors are not shown because "the research literature has provided abundant evidence that these metrics are at best a rough approximation of research impact and scholarly influence." [89]
In 2021, Utrecht University promised to abandon all quantitative bibliometrics, including the impact factor. The university stated that "it has become a very sick model that goes beyond what is really relevant for science and putting science forward". [90] [91] This followed a 2018 decision by the main Dutch funding body for research, NWO, to remove all references to journal impact factors and the h-index in all call texts and application forms. [92] Utrecht's decision met with some resistance. An open letter signed by over 150 Dutch academics argued that, while imperfect, the JIF is still useful, and that omitting it "will lead to randomness and a compromising of scientific quality". [93]
Some related metrics, also calculated and published by the same organization, include journal quartile and percentile rankings within subject categories; a given journal may attain a different quartile or percentile in different categories.
As with the impact factor, there are some nuances to this: for example, Clarivate excludes certain article types (such as news items, correspondence, and errata) from the denominator. [99] [100] [101] [10]
Additional journal-level metrics are available from other organizations. For example, CiteScore is a metric for serial titles in Scopus launched in December 2016 by Elsevier. [102] [103] While these metrics apply only to journals, there are also author-level metrics, such as the h-index, that apply to individual researchers. In addition, article-level metrics measure impact at an article level instead of journal level.
Other more general alternative metrics, or "altmetrics", that include article views, downloads, or mentions in social media, offer a different perspective on research impact, concentrating more on immediate social impact in and outside academia. [62] [104]
Fake impact factors or bogus impact factors are produced by certain companies or individuals. [105] According to an article published in the Electronic Physician, these include Global Impact Factor, Citefactor, and Universal Impact Factor. [105] Jeffrey Beall maintained a list of such misleading metrics. [106] [107] Another deceitful practice is reporting "alternative impact factors", calculated as the average number of citations per article using citation indices other than JCR such as Google Scholar (e.g., "Google-based Journal Impact Factor") or Microsoft Academic. [108]
False impact factors are often used by predatory publishers. [109] [110] Consulting Journal Citation Reports' master journal list can confirm if a publication is indexed by the Journal Citation Reports. [111] The use of fake impact metrics is considered a red flag. [112]
An academic journal or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. They serve as permanent and transparent forums for the presentation, scrutiny, and discussion of research. They nearly universally require peer review for research articles or other scrutiny from contemporaries competent and established in their respective fields.
Scopus is a scientific abstract and citation database, launched by the academic publisher Elsevier in 2004 as a competitor to the older Web of Science. The ensuing competition between the two databases has been characterized as "intense" and is considered to significantly benefit their users in terms of continuous improvement in coverage and search/analysis capabilities, but not in price. The free database The Lens completes the triad of the main universal academic research databases.
The Institute for Scientific Information (ISI) was an academic publishing service, founded by Eugene Garfield in Philadelphia in 1956. ISI offered scientometric and bibliographic database services. Its specialty was citation indexing and analysis, a field pioneered by Garfield.
Scientometrics is a subfield of informetrics that studies quantitative aspects of scholarly literature. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience. Critics have argued that overreliance on scientometrics has created a system of perverse incentives, producing a publish or perish environment that leads to low-quality research.
Citation impact or citation rate is a measure of how many times an academic journal article, book or author is cited by other articles, books or authors. Citation counts are interpreted as measures of the impact or influence of academic work and have given rise to the field of bibliometrics or scientometrics, which specializes in the study of patterns of academic impact through citation analysis. The importance of journals can be measured by the average citation rate: the ratio of the number of citations to the number of articles published within a given time period and in a given index, such as the journal impact factor or the CiteScore. It is used by academic institutions in decisions about academic tenure, promotion and hiring, and hence is also used by authors in deciding which journal to publish in. Citation-like measures are also used in other fields that do ranking, such as Google's PageRank algorithm, software metrics, college and university rankings, and business performance indicators.
The h-index is an author-level metric that measures both the productivity and citation impact of the publications, initially used for an individual scientist or scholar. The h-index correlates with success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index has more recently been applied to the productivity and impact of a scholarly journal as well as a group of scientists, such as a department or university or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.
Journal Citation Reports (JCR) is an annual publication by Clarivate. It has been integrated with the Web of Science and is accessed from the Web of Science Core Collection. It provides information about academic journals in the natural and social sciences, including impact factors. JCR was originally published as a part of the Science Citation Index. Currently, the JCR, as a distinct service, is based on citations compiled from the Science Citation Index Expanded and the Social Sciences Citation Index. As of the 2023 edition, journals from the Arts and Humanities Citation Index and the Emerging Sources Citation Index have also been included.
Journal ranking is widely used in academic circles in the evaluation of an academic journal's impact and quality. Journal rankings are intended to reflect the place of a journal within its field, the relative difficulty of being published in that journal, and the prestige associated with it. They have been introduced as official research evaluation tools in several countries.
The Web of Science is a paid-access platform that provides access to multiple databases that provide reference and citation data from academic journals, conference proceedings, and other documents in various academic disciplines.
The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, is a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the Eigenfactor than those from poorly ranked journals. As a measure of importance, the Eigenfactor score scales with the total impact of a journal: all else being equal, journals generating higher impact in their field have larger Eigenfactor scores. Citation metrics like the Eigenfactor or PageRank-based scores reduce the effect of self-referential groups.
The SCImago Journal Rank (SJR) indicator is a measure of the prestige of scholarly journals that accounts for both the number of citations received by a journal and the prestige of the journals where the citations come from.
Methods of Information in Medicine is a peer-reviewed scientific journal covering research in medical informatics. It is an official journal of the International Medical Informatics Association, the European Federation for Medical Informatics, and the German Association for Medical Informatics, Biometry and Epidemiology. It is the oldest and longest running journal in its field.
In scholarly and scientific publishing, altmetrics are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc.
Predatory publishing, also known as write-only publishing or deceptive publishing, is an exploitative academic publishing business model in which the journal or publisher prioritizes self-interest at the expense of scholarship. It is characterized by misleading information, deviation from the standard peer-review process, a high degree of non-transparency, and often aggressive solicitation practices.
The San Francisco Declaration on Research Assessment (DORA) is a statement that denounces the practice of correlating the journal impact factor to the merits of a specific scientist's contributions. Also according to this statement, this practice creates biases and inaccuracies when appraising scientific research. It also states that the impact factor is not to be used as a substitute "measure of the quality of individual research articles, or in hiring, promotion, or funding decisions".
Respirology is a peer-reviewed medical journal published by Wiley on behalf of the Asian Pacific Society of Respirology. The word respirology is derived from the Latin root respirare, "to breathe" and the Greek root logos, "knowledge". The journal covers clinical respiratory biology and disease, including epidemiology, intensive and critical care medicine, pathology, physiology, thoracic surgery, and general medicine, as it relates to respiratory biology and disease.
Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many metrics have been developed that take into account varying numbers of factors.
Metascience is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing inefficiency. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science". In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."
CiteScore (CS) of an academic journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. It is produced by Elsevier, based on the citations recorded in the Scopus database. Absolute rankings and percentile ranks are also reported for each journal in a given subject area.
The Leiden Manifesto for research metrics (LM) is a list of "ten principles to guide research evaluation", published as a comment in Volume 520, Issue 7548 of Nature, on 22 April 2015. It was formulated by public policy professor Diana Hicks, scientometrics professor Paul Wouters, and their colleagues at the 19th International Conference on Science and Technology Indicators, held between 3–5 September 2014 in Leiden, The Netherlands.
The Immediacy Index is a measure of the speed at which content in a particular journal is picked up and referred to: the average number of times an article is cited in the year it is published. The journal Immediacy Index indicates how quickly articles in a journal are cited; the aggregate Immediacy Index indicates how quickly articles in a subject category are cited.