Journal ranking

Journal ranking is widely used in academic circles in the evaluation of an academic journal's impact and quality. Journal rankings are intended to reflect the place of a journal within its field, the relative difficulty of being published in that journal, and the prestige associated with it. They have been introduced as official research evaluation tools in several countries.

Measures

Traditionally, journal ranking "measures" or evaluations have been provided through institutional lists established by academic leaders or by committee vote. These approaches are notoriously politicized and poor reflections of actual prestige and quality, since they often mirror the biases and personal career objectives of those compiling the lists and produce highly disparate evaluations across institutions. [1] Consequently, many institutions have required external sources of evaluation of journal quality. The traditional external approach has been surveys of leading academics in a given field, but these too are open to bias, though less so than institution-generated lists. [2] Governments, institutions, and leaders in scientometric research have therefore turned to a range of journal-level bibliometric measures that can serve as surrogates for quality and reduce the need for subjective assessment. [1]

As a result, several journal-level metrics have been proposed, most of them citation-based, including the impact factor, the Eigenfactor score, the SCImago Journal Rank (SJR), the Source Normalized Impact per Paper (SNIP), and the journal h-index.

Discussion

Negative consequences of rankings are well documented and relate to the performative effects of using journal rankings for performance measurement. [19] [20] Studies of methodological quality and reliability have found that the "reliability of published research works in several fields may be decreasing with increasing journal rank", [21] contrary to widespread expectations. [22]

For example, McKinnon (2017) analyzed how the ABS-AJG ranking, which despite its methodological shortcomings is widely accepted in British business schools, has had negative consequences for the transportation and logistics management disciplines. [23] A study published in 2021 compared the Impact Factor, the Eigenfactor Score, the SCImago Journal & Country Rank, and the Source Normalized Impact per Paper for journals in pharmacy, toxicology, and biochemistry, and found "a moderate to high and significant correlation" between them. [24]
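
The kind of metric comparison reported in that 2021 study can be illustrated with rank correlation. The following is a minimal sketch, using hypothetical journal names and metric values rather than data from the study, of how Spearman's rank correlation from SciPy could be used to compare the orderings produced by two journal-level metrics.

```python
# Minimal sketch: rank-correlating two journal-level metrics.
# All journal names and metric values below are hypothetical.
from scipy.stats import spearmanr

journals      = ["J. Alpha", "J. Beta", "J. Gamma", "J. Delta", "J. Epsilon"]
impact_factor = [12.4, 6.1, 3.8, 2.2, 0.9]   # hypothetical impact factors
sjr           = [5.6, 2.1, 2.9, 0.5, 0.8]    # hypothetical SJR values

# Spearman's rho compares the orderings induced by the two metrics,
# which is what matters when they are used to rank journals.
rho, p_value = spearmanr(impact_factor, sjr)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # rho = 0.80 for these values
```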

Thousands of universities and research bodies have issued official statements denouncing the idea that research quality can be measured on the one-dimensional scale of a journal ranking, most notably by signing the San Francisco Declaration on Research Assessment (DORA), which asked "not [to] use journal-based metrics ... as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions". [25] The Community for Responsible Research in Business and Management (cRRBM) asks whether "even the academy is being served when faculty members are valued for the quantity and placement of their articles, not for the benefit their research can have for the world". [26] Some academic disciplines, such as management, exhibit a journal ranking list paradox: on the one hand, researchers are aware of the numerous limitations of ranking lists and of their deleterious impact on scientific progress; on the other hand, they generally find journal ranking lists useful and employ them, particularly when their use is not mandated by their institutions. [27]

National rankings

Several national and international rankings of journals exist, for example the Australian Research Council's journal ranking, [28] the CORE rankings, [30] the Finnish Julkaisufoorumi, [32] the Norwegian Register, [33] Italy's ANVUR journal rating, [34] the Chartered Association of Business Schools' Academic Journal Guide, [35] the HEC list of recognized journals in Pakistan, [36] the NAAS journal scores in India, [37] and the journal lists of the Danish [31] and Polish [38] [39] ministries of higher education and science.

They have been introduced as official research evaluation tools in several countries. [40]

Related Research Articles

Academic journal: peer-reviewed scholarly periodical

An academic journal or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. Academic journals serve as permanent and transparent forums for the presentation, scrutiny, and discussion of research. They nearly universally require peer review for research articles or other scrutiny from contemporaries competent and established in their respective fields.

College and university rankings order institutions in higher education based on factors that vary depending on the ranking. Some rankings evaluate institutions within a single country, while others assess institutions worldwide. Rankings are typically conducted by magazines, newspapers, websites, governments, or academics. In addition to ranking entire institutions, specific programs, departments, and schools can be ranked. Some rankings consider measures of wealth, excellence in research, selective admissions, and alumni success. Rankings may also consider various combinations of measures of specialization expertise, student options, award numbers, internationalization, graduate employment, industrial linkage, historical reputation and other criteria.

Scopus is Elsevier's abstract and citation database, launched in 2004. Scopus covers 36,377 titles from 11,678 publishers, of which 34,346 are peer-reviewed journals in top-level subject fields: life sciences, social sciences, physical sciences, and health sciences. It covers three types of sources: book series, journals, and trade journals. Scopus also allows patent searches against a dedicated Lexis-Nexis patent database, albeit with limited functionality.

The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations of articles published in the last two years in a given journal, as indexed by Clarivate's Web of Science.
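
As a concrete illustration of this definition, the sketch below computes a two-year impact factor from hypothetical citation and publication counts. It is only the ratio that the definition describes, not Clarivate's actual implementation or data.

```python
# Minimal sketch of the two-year journal impact factor:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.
# The counts below are hypothetical, not Web of Science data.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example: 1,200 citations in 2023 to articles published in 2021-2022,
# and 400 citable items published in that window -> JIF of 3.0.
print(impact_factor(1200, 400))
```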

Bibliometrics: statistical analysis of written publications

Bibliometrics is the use of statistical methods to analyse books, articles and other publications, especially in scientific contexts. Bibliometric methods are frequently used in the field of library and information science. Bibliometrics is closely associated with scientometrics, the analysis of scientific metrics and indicators, to the point that both fields largely overlap.

Scientometrics is the field of study which concerns itself with measuring and analysing scholarly literature. Scientometrics is a sub-field of informetrics. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts. In practice there is a significant overlap between scientometrics and other scientific fields such as information systems, information science, science of science policy, sociology of science, and metascience. Critics have argued that over-reliance on scientometrics has created a system of perverse incentives, producing a publish or perish environment that leads to low-quality research.

Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the directed graph of citations — links from one document to another document — to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books. For another example, judges of law support their judgements by referring back to judgements made in earlier cases. An additional example is provided by patents which contain prior art, citation of earlier patents relevant to the current claim. The digitization of patent data and increasing computing power have led to a community of practice that uses these citation data to measure innovation attributes, trace knowledge flows, and map innovation networks.
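
A toy example of this graph-based view follows: it builds a small, hypothetical citation graph and counts incoming citations (in-degree) as a crude measure of a document's importance. The document identifiers and edges are invented for illustration.

```python
# Minimal sketch: citation analysis as a directed graph.
# Edges point from the citing document to the cited document (hypothetical data).
from collections import defaultdict

citations = [
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_B", "paper_D"),
    ("paper_E", "paper_C"),
    ("paper_E", "paper_D"),
]

in_degree = defaultdict(int)
for citing, cited in citations:
    in_degree[cited] += 1          # count how often each document is cited

# Rank documents by citation count; paper_C comes out as the most cited.
for doc, count in sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True):
    print(doc, count)
```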

Google Scholar: academic search service by Google

Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.

Informetrics: study of the quantitative aspects of information

Informetrics is the study of quantitative aspects of information. It is an extension and evolution of traditional bibliometrics and scientometrics, and uses their methods to study mainly the problems of literature information management and the evaluation of science and technology. Informetrics is an independent discipline that applies quantitative methods from mathematics and statistics to study the processes, phenomena, and laws of information. It has gained attention as a common scientific method for academic evaluation, the identification of research hotspots within disciplines, and trend analysis.

Citation impact or citation rate is a measure of how many times an academic journal article, book, or author is cited by other articles, books, or authors. Citation counts are interpreted as measures of the impact or influence of academic work and have given rise to the fields of bibliometrics and scientometrics, which specialize in the study of patterns of academic impact through citation analysis. The importance of journals can be measured by the average citation rate, the ratio of the number of citations to the number of articles published within a given time period and in a given index, such as the journal impact factor or the CiteScore. It is used by academic institutions in decisions about academic tenure, promotion, and hiring, and hence is also used by authors in deciding which journal to publish in. Citation-like measures are also used in other fields that do ranking, such as Google's PageRank algorithm, software metrics, college and university rankings, and business performance indicators.

The h-index is an author-level metric that measures both the productivity and the citation impact of a researcher's publications; it was initially used for individual scientists and scholars. The h-index correlates with success indicators such as winning the Nobel Prize, being accepted for research fellowships, and holding positions at top universities. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. It has more recently been applied to the productivity and impact of scholarly journals as well as groups of scientists, such as a department, university, or country. The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality, and is sometimes called the Hirsch index or Hirsch number.
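
A minimal sketch of the definition follows: the h-index is the largest h such that h of the publications have each been cited at least h times. The citation counts used here are hypothetical.

```python
# Minimal sketch of the h-index from a list of citation counts (hypothetical data).

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank        # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers cited at least 3 times
```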

Journal Citation Reports (JCR) is an annual publication by Clarivate. It has been integrated with the Web of Science and is accessed from the Web of Science Core Collection. It provides information about academic journals in the natural and social sciences, including impact factors. The JCR was originally published as a part of the Science Citation Index. Currently, the JCR, as a distinct service, is based on citations compiled from the Science Citation Index Expanded and the Social Sciences Citation Index. As of the 2023 edition, journals from the Arts and Humanities Citation Index and the Emerging Sources Citation Index are also included.

The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, is a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the eigenfactor than those from poorly ranked journals. As a measure of importance, the Eigenfactor score scales with the total impact of a journal. All else equal, journals generating higher impact to the field have larger Eigenfactor scores. Citation metrics like eigenfactor or PageRank-based scores reduce the effect of self-referential groups.
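
The weighting idea behind such scores can be sketched with power iteration on a column-normalized citation matrix, so that citations from highly weighted journals contribute more. This is a toy illustration with hypothetical counts, not the published Eigenfactor algorithm, which additionally excludes journal self-citations and mixes in a teleportation term.

```python
# Toy sketch of an eigenvector-based journal weighting (hypothetical counts).
import numpy as np

# cites[i, j] = citations from journal j to journal i.
cites = np.array([
    [0, 4, 2],
    [3, 0, 1],
    [1, 2, 0],
], dtype=float)

# Column-normalize so each citing journal distributes one unit of influence.
transfer = cites / cites.sum(axis=0)

# Power iteration converges to the leading eigenvector: the journal weights.
weights = np.ones(3) / 3
for _ in range(100):
    weights = transfer @ weights
    weights /= weights.sum()

print(np.round(weights, 3))  # higher weight = more citations from well-cited journals
```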

SCImago Journal Rank: metric of scholarly journals

The SCImago Journal Rank (SJR) indicator is a measure of the prestige of scholarly journals that accounts for both the number of citations received by a journal and the prestige of the journals where the citations come from.

Altmetrics: alternative metrics for analyzing scholarship

In scholarly and scientific publishing, altmetrics are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc.

San Francisco Declaration on Research Assessment: 2012 manifesto against using the journal impact factor to assess a scientist's work

The San Francisco Declaration on Research Assessment (DORA) is a statement that denounces the practice of equating the journal impact factor with the merits of a specific scientist's contributions. According to the statement, this practice creates biases and inaccuracies when appraising scientific research. It also states that the impact factor is not to be used as a substitute "measure of the quality of individual research articles, or in hiring, promotion, or funding decisions".

The University Ranking by Academic Performance (URAP) is a university ranking developed by the Informatics Institute of Middle East Technical University. Since 2010, it has published annual national and global rankings of the top 2,000 institutions. URAP's scientometric measurements are based on data obtained from the Institute for Scientific Information via Web of Science and InCites. For the global rankings, URAP employs indicators of research performance, including the number of articles, citations, total documents, total article impact, total citation impact, and international collaboration. In addition to the global rankings, URAP publishes regional rankings for universities in Turkey using additional indicators, such as the numbers of students and faculty members, obtained from the Center of Measuring, Selection and Placement (ÖSYM).

Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many metrics have been developed that take into account varying numbers of factors.

There are a number of approaches to ranking academic publishing groups and publishers. Rankings rely on subjective impressions within the scholarly community, on analyses of the prize winners of scientific associations, on the discipline, on a publisher's reputation, and on its impact factor.

The Leiden Manifesto for research metrics (LM) is a list of "ten principles to guide research evaluation", published as a comment in Volume 520, Issue 7548 of Nature, on 22 April 2015. It was formulated by public policy professor Diana Hicks, scientometrics professor Paul Wouters, and their colleagues at the 19th International Conference on Science and Technology Indicators, held between 3–5 September 2014 in Leiden, The Netherlands.

References

  1. Lowry, Paul Benjamin; Gaskin, James; Humpherys, Sean L.; Moody, Gregory D.; Galletta, Dennis F.; Barlow, Jordan B.; Wilson, David W. (2013). "Evaluating Journal Quality and the Association for Information Systems Senior Scholars' Journal Basket Via Bibliometric Measures: Do Expert Journal Assessments Add Value?". Management Information Systems Quarterly. 37 (4): 993–1012. doi:10.25300/MISQ/2013/37.4.01. JSTOR 43825779. SSRN 2186798.
  2. Lowry, Paul Benjamin; Romans, Denton; Curtis, Aaron (2004). "Global Journal Prestige and Supporting Disciplines: A Scientometric Study of Information Systems Journals". Journal of the Association for Information Systems . 5 (2): 29–77. doi: 10.17705/1jais.00045 . SSRN   666145.
  3. Minasny, Budiman; Hartemink, Alfred E.; McBratney, Alex; Jang, Ho-Jun (2013-10-22). "Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar". PeerJ . 1: e183. doi: 10.7717/peerj.183 . ISSN   2167-8359. PMC   3807595 . PMID   24167778.
  4. Serenko, Alexander; Dohan, Michael (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics . Elsevier. 5 (4): 629–648. doi:10.1016/j.joi.2011.06.002.
  5. "About OOIR: Journal-level data" . Retrieved 2023-03-14.
  6. Holsapple, Clyde W. (2008). "A publication power approach for identifying premier information systems journals". Journal of the Association for Information Science and Technology . Wiley Online Library. 59 (2): 166–185. doi:10.1002/asi.20679.
  7. Serenko, Alexander; Jiao, Changquan (2012). "Investigating Information Systems Research in Canada" (PDF). Canadian Journal of Administrative Sciences . Wiley Online Library. 29: 3–24. doi:10.1002/CJAS.214.
  8. Alhoori, Hamed; Furuta, Richard (2013). "Can Social Reference Management Systems Predict a Ranking of Scholarly Venues?". Lecture Notes in Computer Science. Vol. 8092. Springer. pp. 138–143. CiteSeerX 10.1.1.648.3770. doi:10.1007/978-3-642-40501-3_14. ISBN 978-3-64240-500-6.
  9. Cornillier, Fabien; Charles, Vincent (2015). "Measuring the attractiveness of academic journals: A direct influence aggregation model" (PDF). Operations Research Letters. 43 (2): 172–176. doi:10.1016/j.orl.2015.01.007. S2CID   13310055.
  10. "Elsevier Announces Enhanced Journal Metrics SNIP and SJR Now Available in Scopus" (Press release). Elsevier. Retrieved 2014-07-27.
  11. Moed, Henk (2010). "Measuring contextual citation impact of scientific journals". Journal of Informetrics. Elsevier. 4 (3): 256–277. arXiv: 0911.2632 . doi:10.1016/j.joi.2010.01.002. S2CID   10644946.
  12. Pinski, Gabriel; Narin, Francis (1976). "Citation influence for journal aggregates of scientific publications: Theory with application to literature of physics". Information Processing and Management . 12 (5): 297–312. doi:10.1016/0306-4573(76)90048-0.
  13. Liebowitz, S. J.; Palmer, J. P. (1984). "Assessing the relative impacts of economics journals" (PDF). Journal of Economic Literature . 22 (1): 77–88. JSTOR   2725228.
  14. Palacios-Huerta, Ignacio; Volij, Oscar (2004). "The Measurement of Intellectual Influence". Econometrica . 72 (3): 963–977. CiteSeerX   10.1.1.165.6602 . doi:10.1111/j.1468-0262.2004.00519.x.
  15. Kodrzycki, Yolanda K.; Yu, Pingkang (2006). "New Approaches to Ranking Economics Journals". Contributions to Economic Analysis & Policy. 5 (1). CiteSeerX   10.1.1.178.7834 . doi:10.2202/1538-0645.1520.
  16. Bollen, Johan; Rodriguez, Marko A.; Van De Sompel, Herbert (December 2006). "MESUR: Usage-based metrics of scholarly impact". Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries. Vol. 69. pp. 669–687. arXiv:cs.GL/0601030. Bibcode:2006cs........1030B. doi:10.1145/1255175.1255273. ISBN 978-1-59593-644-8. S2CID 3115544.
  17. Bergstrom, C. T. (May 2007). "Eigenfactor: Measuring the value and prestige of scholarly journals". College & Research Libraries News . 68 (5): 314–316. doi: 10.5860/crln.68.5.7804 .
  18. West, Jevin Darwin. "Eigenfactor.org". Eigenfactor . Retrieved 2014-05-18.
  19. Espeland, Wendy Nelson; Sauder, Michael (2007). "Rankings and Reactivity: How Public Measures Recreate Social Worlds". American Journal of Sociology. 113: 1–40. doi:10.1086/517897. hdl: 1885/30995 . S2CID   113406795.
  20. Grant, David B.; Kovács, Gyöngyi; Spens, Karen (2018). "Questionable research practices in academia: Antecedents and consequences". European Business Review. 30 (2): 101–127. doi:10.1108/EBR-12-2016-0155.
  21. Brembs, Björn (2018). "Prestigious Science Journals Struggle to Reach Even Average Reliability". Frontiers in Human Neuroscience. 12: 37. doi: 10.3389/fnhum.2018.00037 . PMC   5826185 . PMID   29515380.
  22. Triggle, Chris R; MacDonald, Ross; Triggle, David J.; Grierson, Donald (2022-04-03). "Requiem for impact factors and high publication charges". Accountability in Research. 29 (3): 133–164. doi: 10.1080/08989621.2021.1909481 . PMID   33787413. S2CID   232430287. One might expect, therefore, that a high JIF factor indicates a higher standard of interest, accuracy and reliability of papers published therein. This is sometimes true but unfortunately is certainly not always the case (Brembs 2018, 2019). Thus, Björn Brembs (2019) concluded: "There is a growing body of evidence against our subjective notion of more prestigious journals publishing 'better' science. In fact, the most prestigious journals may be publishing the least reliable science."
  23. McKinnon, Alan C. (2017). "Starry-eyed II: The logistics journal ranking debate revisited". International Journal of Physical Distribution & Logistics Management. 47 (6): 431–446. doi:10.1108/IJPDLM-02-2017-0097.
  24. Aquino-Canchari, Christian Renzo; Ospina-Meza, Richard Fredi; Guillen-Macedo, Karla (2020-07-30). "Las 100 revistas de mayor impacto sobre farmacología, toxicología y farmacia". Revista Cubana de Investigaciones Biomédicas. 39 (3). ISSN   1561-3011.
  25. "Home". DORA.
  26. Glick, William; Tsui, Anne; Davis, Gerald (2018-05-02). Cutler, Dave (ed.). "The Moral Dilemma to Business Research". BizEd Magazine. Archived from the original on 2018-05-07.
  27. Serenko, Alexander; Bontis, Nick (2024). "Dancing with the Devil: The use and perceptions of academic journal ranking lists in the management field" (PDF). Journal of Documentation. In press. doi:10.1108/JD-10-2023-0217. S2CID 266921800.
  28. "Australian Research Council ranking of journals worldwide". 2011-06-12. Archived from the original on 2011-06-12.
  29. Li, Xiancheng; Rong, Wenge; Shi, Haoran; Tang, Jie; Xiong, Zhang (2018-05-11). "The impact of conference ranking systems in computer science: a comparative regression analysis". Scientometrics. Springer Science and Business Media LLC. 116 (2): 879–907. doi:10.1007/s11192-018-2763-1. ISSN   0138-9130. S2CID   255013801.
  30. "CORE Rankings Portal". core.edu.au. Retrieved 2022-12-27.
  31. "Uddannelses- og Forskningsministeriet".
  32. "Julkaisufoorumi". December 2023.
  33. "Search in Norwegian List | Norwegian Register".
  34. "Rating of Scientific Journals – ANVUR – Agenzia Nazionale di Valutazione del Sistema Universitario e della Ricerca".
  35. "Chartered Association of Business Schools – Academic Journal Guide".
  36. "List of HEC Recognized Journals".
  37. "NAAS Score of Science Journals" (PDF). National Academy of Agricultural Sciences . 2022-01-01. Archived (PDF) from the original on 2023-03-15.
  38. "Polish Ministry of Higher Education and Science (2019)". www.bip.nauka.gov.pl. Retrieved 2019-10-12.
  39. "Polish Ministry of Higher Education and Science (2021)". www.bip.nauka.gov.pl. Retrieved 2021-02-09.
  40. Pontille, David; Torny, Didier (2010). "The controversial policies of journal ratings: Evaluating social sciences and humanities". Research Evaluation. 19 (5): 347–360. doi:10.3152/095820210X12809191250889. S2CID   53387400.