The Leiden Manifesto for research metrics (LM) is a list of "ten principles to guide research evaluation", [1] published as a comment in Volume 520, Issue 7548 of Nature on 22 April 2015. It was formulated by public policy professor Diana Hicks, scientometrics professor Paul Wouters, and their colleagues at the 19th International Conference on Science and Technology Indicators, held from 3 to 5 September 2014 in Leiden, the Netherlands. [2]
The LM was proposed as a guide to combat the misuse of bibliometrics in the evaluation of scientific research literature. Examples of commonly used bibliometrics for science, or scientometrics, are the h-index, the impact factor, and websites displaying indicators such as altmetrics. According to Hicks et al., these metrics are applied pervasively and often misguide evaluations of scientific material. [3]
| Authors | Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke, Ismael Rafols |
| --- | --- |
| Publisher | Nature |
| Publication date | 22 April 2015 |
| Website | leidenmanifesto.org |
Motivation for the codification of the Leiden Manifesto arose from a growing worry that "impact-factor obsession" [1] was leading to inadequate judgement of scientific material that deserves fair evaluation. The lead author of the LM, Diana Hicks, hoped that publishing in Nature would spread ideas already commonplace in the scientometrics sphere to the broader scientific community. [4] Although the principles of the LM are not new to scientometricians, Hicks et al. sought to unify these ideas into a guide for future editors and reviewers of scientific literature.
Reform of how research assessment is conducted remains an ongoing debate in the scientific community. A well-known declaration intended to curb the use of the impact factor, the San Francisco Declaration on Research Assessment (DORA), was published around two years before the manifesto. The LM broadened the ideas presented in DORA, which has since been signed by over 2,000 organizations and over 15,000 individuals. [5]
One of the main concerns about overuse of citation-based performance indicators came from the observation that smaller research organizations and institutions may be negatively affected by their metric indices. In one public debate at the Centre for Science and Technology Studies at Leiden University, it was acknowledged that indicators which measure citations may give "more weight to publications from fields with a high expected number of citations than to publications from fields with a low expected number of citations". [6]
Although the main focus of the LM is the use of scientometrics in research evaluation, its background also explains how overuse of metrics can adversely affect the wider scholarly community, for example through the position of universities in global rankings. [7] [8] According to Hicks et al., scientific metrics such as citation rate are relied on far too heavily for ranking the quality of universities (and thus the quality of their research output).
The background of the LM describes why the misuse of metrics is becoming a larger problem in the scientific community. The journal impact factor, originally created by Eugene Garfield as a tool to help librarians collect data when selecting journals to purchase, is now mainly used as a measure of journal quality. [9] Hicks et al. see this as an abuse of the data, used to judge research too hastily: an impact factor, while a good metric of the size and experience of a journal, is not necessarily sufficient to describe the quality of its papers, and even less so the quality of a single paper.
Consisting of ten concise principles, each with a short description, the Leiden Manifesto aims to reshape the way research evaluations are conducted by academic publishers and scientific institutions. Its emphasis lies on detailed, close evaluation of research rather than excessive reliance on quantitative data. It aims to promote academic excellence and fairness through thorough scrutiny, and to remove possible perverse incentives created by scientometrics, such as their use to judge academic capability and university quality. [10]
The ten principles of the Leiden Manifesto are as follows: [1]
1. Quantitative evaluation should support qualitative, expert assessment.
2. Measure performance against the research missions of the institution, group or researcher.
3. Protect excellence in locally relevant research.
4. Keep data collection and analytical processes open, transparent and simple.
5. Allow those evaluated to verify data and analysis.
6. Account for variation by field in publication and citation practices.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
8. Avoid misplaced concreteness and false precision.
9. Recognize the systemic effects of assessment and indicators.
10. Scrutinize indicators regularly and update them.
In 2016, the Leiden Manifesto received the John Ziman Award from the European Association for the Study of Science and Technology (EASST) for its effort to bring scientometrics knowledge to the scientific community as a whole. EASST president Fred Steward stated that the LM "emphasizes situatedness, in terms of different cognitive domains and research missions as well as the wider socioeconomic, national and regional context". [13] The award helped solidify the LM's place in the ongoing public debate within the scholarly community and was welcomed by lead author Diana Hicks. [4]
The Leiden Manifesto gained popularity following its publication, mainly within the scholarly publishing community, which looked to reform its practices. The LM is often cited alongside similar publications, namely DORA and the UK-based academic review The Metric Tide. [4] [14] In a public statement, the University of Leeds committed to using the principles of the LM, along with DORA and The Metric Tide, in order to further research excellence. [15]
LIBER, an association of European research libraries, issued a substantial review of the LM in 2017, concluding that the LM was a "solid foundation" on which academic libraries could base their assessment of metrics. [16]
Elsevier, a global leader in research publishing and information analytics, announced on 14 July 2020 that it would endorse the LM to guide its development of improved research evaluation. Elsevier stated that the principles of the manifesto were already close in nature to its 2019 CiteScore metrics, summarized as an "improved calculation methodology" providing "a more robust, fair and faster indicator of research impact". [17] This alignment further popularized the LM and illustrated a shift in evaluation practices among prominent research publishers.
The Loughborough University LIS-Bibliometrics committee chose to base its principles on those of the LM rather than DORA because, according to its policy manager Elizabeth Gadd, the LM takes a "broader approach to the responsible use of all bibliometrics across a range of disciplines and settings". [18] Stephen Curry, the chair of the DORA steering committee, responded by emphasizing that DORA was aiming to extend its "disciplinary and geographical reach". [19] He nevertheless made clear that a university should be free to follow either DORA or the LM, or neither, as long as reasonable justification is provided.
In a perspective for Issues in Science and Technology, David Moher et al. referenced the LM, arguing that academic institutions were not asking the "right questions" (i.e. about research planning, timeframe, reproducibility, and results) when assessing scientists. Moher et al. criticize the obsession with journal impact factors and the "gaming" of scientometrics by investigators. [20] Instead, they advocate the use of DORA and the LM when assessing individual scientists and research.
In a letter in Science and Engineering Ethics, T. Kanchan and K. Krishan describe the LM as "one of the best criteria" for assessing scientific research, especially considering the "rat race" for publications in the scholarly community. Kanchan and Krishan emphasize that use of the LM will lead to "progress of science and society at large". [21]
The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations of articles published in the last two years in a given journal, as indexed by Clarivate's Web of Science.
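As an illustration of that definition, the two-year impact factor for a year y can be written as the following ratio; the worked numbers below are hypothetical and not drawn from any journal cited here:

```latex
\mathrm{IF}_{y} \;=\;
\frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}
     {\text{number of citable items published in years } y-1 \text{ and } y-2}
```

For example, a journal that published 100 citable items in 2021–2022, and whose items from those two years were cited 250 times in 2023, would have a 2023 impact factor of 250 / 100 = 2.5.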
Bibliometrics is the application of statistical methods to the study of bibliographic data, especially in scientific and library and information science contexts, and is closely associated with scientometrics to the point that both fields largely overlap.
Scientometrics is a subfield of informetrics that studies quantitative aspects of scholarly literature. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience. Critics have argued that overreliance on scientometrics has created a system of perverse incentives, producing a publish or perish environment that leads to low-quality research.
Informetrics is the study of quantitative aspects of information. It is an extension and evolution of traditional bibliometrics and scientometrics, using their methods to study mainly the problems of literature information management and the evaluation of science and technology. Informetrics is an independent discipline that uses quantitative methods from mathematics and statistics to study information processes, phenomena, and regularities. It has gained attention as a common scientific method for academic evaluation, for identifying disciplinary research hotspots, and for trend analysis.
Citation impact or citation rate is a measure of how many times an academic journal article, book, or author is cited by other articles, books, or authors. Citation counts are interpreted as measures of the impact or influence of academic work and have given rise to the field of bibliometrics or scientometrics, which specializes in the study of patterns of academic impact through citation analysis. The importance of journals can be measured by the average citation rate, the ratio of the number of citations to the number of articles published within a given time period and in a given index, such as the journal impact factor or the CiteScore. It is used by academic institutions in decisions about academic tenure, promotion, and hiring, and hence is also used by authors in deciding which journal to publish in. Citation-like measures are also used in other fields that do ranking, such as Google's PageRank algorithm, software metrics, college and university rankings, and business performance indicators.
The h-index is an author-level metric that measures both the productivity and the citation impact of a scholar's publications; it was initially used for individual scientists or scholars. The h-index correlates with success indicators such as winning the Nobel Prize, being accepted for research fellowships, and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. It has more recently been applied to the productivity and impact of scholarly journals as well as groups of scientists, such as a department, university, or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality and is sometimes called the Hirsch index or Hirsch number.
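The definition above can be made concrete with a minimal sketch of how an h-index is computed from a list of per-paper citation counts; the function name and the example counts here are illustrative, not taken from the sources cited in this article:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Sort citation counts from most to least cited
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank   # this paper still meets the threshold of "rank" citations
        else:
            break      # all remaining papers have fewer citations than their rank
    return h

# Hypothetical example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # prints 4 -> four papers have at least 4 citations each
```

Sorting the counts in descending order means the h-index is simply the largest rank at which the citation count still meets or exceeds that rank.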
Journal Citation Reports (JCR) is an annual publication by Clarivate. It has been integrated with the Web of Science and is accessed from the Web of Science Core Collection. It provides information about academic journals in the natural and social sciences, including impact factors. JCR was originally published as a part of the Science Citation Index. Currently, the JCR, as a distinct service, is based on citations compiled from the Science Citation Index Expanded and the Social Sciences Citation Index. As of the 2023 edition, journals from the Arts and Humanities Citation Index and the Emerging Sources Citation Index have also been included.
Journal ranking is widely used in academic circles in the evaluation of an academic journal's impact and quality. Journal rankings are intended to reflect the place of a journal within its field, the relative difficulty of being published in that journal, and the prestige associated with it. They have been introduced as official research evaluation tools in several countries.
The Web of Science is a paid-access platform that provides access to multiple databases that provide reference and citation data from academic journals, conference proceedings, and other documents in various academic disciplines.
An academic discipline or academic field is a subdivision of knowledge that is taught and researched at the college or university level. Disciplines are defined and recognized by the academic journals in which research is published, and by the learned societies and academic departments or faculties within colleges and universities to which their practitioners belong. Academic disciplines are conventionally divided into the humanities, the scientific disciplines, and the formal sciences such as mathematics and computer science; the social sciences are sometimes considered a fourth category.
A bibliometrician is a researcher or specialist in bibliometrics. The term is near-synonymous with informetrician, scientometrician, and webometrician (one who studies webometrics).
In scholarly and scientific publishing, altmetrics are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc.
The San Francisco Declaration on Research Assessment (DORA) is a statement that denounces the practice of correlating the journal impact factor with the merits of a specific scientist's contributions. According to the statement, this practice creates biases and inaccuracies when appraising scientific research. It also states that the impact factor is not to be used as a substitute "measure of the quality of individual research articles, or in hiring, promotion, or funding decisions".
Article-level metrics are citation metrics which measure the usage and impact of individual scholarly articles.
The CWTS Leiden Ranking is an annual global university ranking based exclusively on bibliometric indicators. The rankings are compiled by the Centre for Science and Technology Studies at Leiden University in the Netherlands. The Clarivate Analytics bibliographic database Web of Science is used as the source of the publication and citation data.
The University Ranking by Academic Performance (URAP) is a university ranking developed by the Informatics Institute of Middle East Technical University. Since 2010, it has published annual national and global college and university rankings for the top 2000 institutions. The scientometric measurement of URAP is based on data obtained from the Institute for Scientific Information via Web of Science and InCites. For global rankings, URAP employs indicators of research performance including the number of articles, citations, total documents, article impact total, citation impact total, and international collaboration. In addition to global rankings, URAP publishes regional rankings for universities in Turkey using additional indicators, such as the numbers of students and faculty members, obtained from the Center of Measuring, Selection and Placement (ÖSYM).
Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many metrics have been developed that take into account varying numbers of factors.
There are a number of approaches to ranking academic publishing groups and publishers. Rankings rely on subjective impressions by the scholarly community, on analyses of the prize winners of scientific associations, on discipline, on a publisher's reputation, and on its impact factor.
Ronald Rousseau is a Belgian mathematician and information scientist. He has obtained an international reputation for his research on indicators and citation analysis in the fields of bibliometrics and scientometrics.
The open science movement has expanded the uses of scientific output beyond specialized academic circles.