EDITORIAL

Year: 2020 | Volume: 15 | Issue: 3 | Page: 149-154
Journal metrics: Different from author metrics
Chengappa Kavadichanda
Department of Clinical Immunology, JIPMER, Puducherry, India
Date of Submission: 05-Mar-2020
Date of Acceptance: 20-May-2020
Date of Web Publication: 03-Sep-2020
Correspondence Address: Dr. Chengappa Kavadichanda JIPMER, Gorimedu, Dhanvantri Nagar, Puducherry - 605 005 India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/injr.injr_38_20
How to cite this article: Kavadichanda C. Journal metrics: Different from author metrics. Indian J Rheumatol 2020;15:149-54
Communication in science is as important as the scientific discovery itself. Advances in the internet have made it easier to communicate and disseminate scientific work. This also means that the work can easily be subjected to rigorous and ruthless evaluation. The dilemma of what, where, and how to publish is faced, to varying extents, by seasoned and novice researchers alike. This dilemma, compounded by jargon such as the impact factor, H-index, Altmetrics, and so on, further deters the soft-hearted scientist in each one of us!
In an attempt to attribute merit, the scientific literature is weighed and measured using various statistical methods. The scores obtained from these tools are meant to reflect the impact of research and are collectively known as bibliometrics.[1]
Bibliometric scores are broadly classified into two categories[2]:
a. Classical/traditional
– Scores based on the number of citations each article acquires or the overall citations a journal receives.
Examples – Journal impact factor, CiteScore, H-index and its variations.
b. Alternative metrics
– In the current scenario, the academic output across the world has surpassed 2.5 million articles per year. Publications are accessible much before printed journals can reach readers' hands, and they are discussed and debated in the open on a variety of social networking sites. Relying on the traditional metrics alone in such a scenario may be outdated and archaic. To quantify such dynamic engagement, several article-level alternative metrics have emerged. The alternative metrics may further be classified as follows.
- Simple metrics: Scored on the basis of citations in peer-reviewed journals, like the traditional metrics, but assisted by algorithms to overcome some of the shortcomings of the latter.
Examples: SCImago journal rank (SJR), Eigenfactor Score (ES) and source-normalized impact per paper (SNIP).
- Hybrid metrics: Incorporate the overall impact of the article by considering various factors such as social media mentions, downloads, and citations across various platforms in a weighted manner, along with the scores in the simple metrics.
Examples: Altmetrics and Plum Analytics.
These bibliometric scores are primarily used to quantify the impact an author, article, or journal has in the world of scholarly publication. Understanding their strengths and deficits is essential for authors, reviewers, and readers alike. For an author, the scores are important because they help her/him choose an appropriate journal in which to publish and also become an important component of the bio-sketch.[3] From a reviewer's point of view, peer reviews and post-publication reviews increasingly contribute to certain alternative metrics, so understanding these scores is important for self-assessment and growth as a reviewer. For editors, the scores may reflect the credibility of the submitting author. Moreover, editors across fields are striving to improve the quality of their journals, and these metrics provide them with a numerical value to target [Table 1].

Table 1: Summary of strengths and shortcomings of some of the important journal and author metrics
Journal Metrics
Journal metrics are measures that are intended to provide insight into three aspects – impact (a proxy for quality and credibility), speed, and reach of the scientific literature, which are extremely important for all the stakeholders involved in scholarly publication.[4]
The first-ever metric was suggested by Charles Babbage by simply counting and cataloging the publications of the authors. Counting of publications then progressed to the counting of citations, ultimately resulting in various bibliometric scores.[5]
Classical Journal Metrics
Journal impact factor
The journal impact factor is calculated by Clarivate Analytics, formerly the IP and Science business of Thomson Reuters. The metric was proposed in 1955 by Eugene Garfield and Sher of the Institute for Scientific Information.[5]
Before the digital boom in publishing, the sole purpose of the journal impact factor (JIF) was to help librarians judiciously choose worthy journals to subscribe to and add to their library's collection. The impact factor (IF), given its ease of calculation, has remained one of the leading proxies for evaluating a journal's quality even though libraries are not the same anymore.
Definition
The impact factor is calculated each year over a 3-year window and reflects the performance of the journal in the preceding 2 years. It is obtained by counting the citations in year 3 to a journal's content published in years 1 and 2 and then dividing this by the number of citable publications the journal produced in those 2 years. Though there is a lot of ambiguity as to what should be considered a citable publication, the general consensus is to include only original research reports and reviews in the "citable publication" list. This strategy excludes editorials, letters to the editor, news items, tributes and obituaries, correction notices, and meeting abstracts from the denominator. These exclusions are made so as not to penalize a journal for publishing material such as perspectives and meeting abstracts, which are important for the communication and progress of science but are usually not cited.
Calculation of journal impact factor

JIF (year 3) = citations received in year 3 to items published in years 1 and 2 ÷ number of citable publications* in years 1 and 2

*Original research reports and reviews.
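To make the arithmetic concrete, here is a minimal sketch in Python using entirely hypothetical numbers; actual JIF values are computed by Clarivate from Journal Citation Reports data.

# Minimal sketch of the JIF arithmetic with hypothetical counts.
def journal_impact_factor(citations_in_year3, citable_items_years1_and_2):
    """Citations received in year 3 to content published in years 1 and 2,
    divided by the number of citable items (original articles and reviews)
    published in years 1 and 2."""
    return citations_in_year3 / citable_items_years1_and_2

# Example: 150 citations in year 3 to the 75 citable items of years 1 and 2.
print(round(journal_impact_factor(150, 75), 2))  # 2.0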
Drawbacks of journal impact factor
The JIF gives insights into a journal's citations in general without looking at the performance of individual articles. Thus, the IF is not useful in analyzing the quality of individual articles or researchers.[6] The JIF also gives undue importance to articles published in English. This has pushed work published from non-English-speaking countries, particularly work with a country name in the title, abstract, or keywords, into low-impact publications.[7]
The impact factor is calculated over the preceding 2 years, whereas the life of published papers is much longer. Impactful scientific work may have a very long citation life, which in reality speaks highly of the article.[8] Besides this, another important factor is the number of specialists in a particular field. For instance, rheumatology is a field with a relatively small number of professionals. This directly determines how much a particular rheumatology journal is cited, resulting in an inevitably low JIF.[9]
Besides all this, the JIF can be gamed by increasing the number of self-citations or by choosing to publish only review articles, falsely inflating the impact of a particular journal.
CiteScore
Launched in December 2016 by Elsevier, CiteScore (CS) is the average number of citations per document that a title receives over a 3-year period.[10] Unlike the JIF, the CS includes all the material published in a journal in both the numerator and the denominator. This means that not only original research papers and reviews but also letters, notes, editorials, conference papers, and other documents indexed by Scopus are included.
Though its creators claim that CS provides a comprehensive, transparent, and current view of a journal's impact, it suffers from all the shortcomings seen with the JIF. It is part of the Scopus basket of journal metrics, which includes SNIP, SJR, citation and document counts, and percentage cited.
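As with the JIF, the underlying arithmetic is a simple ratio; the sketch below (Python, hypothetical numbers) differs only in counting every Scopus-indexed document type in both the numerator and the denominator.

# Hedged sketch of the CiteScore ratio over a 3-year window.
def cite_score(citations_to_window_documents, documents_in_window):
    """All citations received by all documents (articles, reviews, letters,
    editorials, conference papers, etc.) published in the 3-year window,
    divided by the number of those documents."""
    return citations_to_window_documents / documents_in_window

print(round(cite_score(600, 240), 2))  # 2.5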

Alternative Journal Metrics
To overcome the shortcomings of the traditional metrics, several alternative metrics have cropped up and are gaining relevance. The commonly used metrics under this category are SJR and ES.
SCImago journal rank
The SJR is developed from information in the Scopus® database (Elsevier B.V.).[6] The SJR is calculated by assigning each citation a weighted value based on the subject field, quality, and reputation of the citing journal, which overcomes the possibility of lower scores in subjects with fewer experts. It takes into account citations over the past 3 years from the Scopus database. Using this algorithm-based journal ranking, the SJR manages to normalize for differences in citation behavior between subject fields. The SJR also has algorithms to handle self-citations. Moreover, the SJR is open access and is available free of cost at www.scimagojr.com.
Eigenfactor score
The Eigenfactor score (ES) is an indicator of the global influence of JCR journals. The algorithm used by the ES is similar to the one used by Google to rank websites. It counts the number of times a journal's articles have been cited over the last 5 years. Along with the count of citations, the ES also considers the journals in which the citing articles appear.[11] Thus, the algorithm is designed to give a higher rating when the overall citation rate of the journal in which the work is cited is high. The ES also accounts for author- and journal-level self-citations, which might otherwise falsely inflate the impact of a publication.
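The sketch below is not the official Eigenfactor computation (which works on a 5-year journal-to-journal citation matrix with its own normalizations); it is a minimal PageRank-style iteration, with made-up numbers, that shows the core idea of weighting citations by the influence of the citing journal.

import numpy as np

def influence_scores(citation_matrix, damping=0.85, iterations=100):
    """citation_matrix[i][j] = citations from journal i to journal j
    (self-citations are assumed to have been zeroed out beforehand)."""
    C = np.asarray(citation_matrix, dtype=float)
    n = C.shape[0]
    row_sums = C.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    P = C / row_sums                       # each journal distributes its "votes"
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * scores @ P
    return scores / scores.sum()

# Three hypothetical journals; a citation from a highly cited journal counts for more.
print(influence_scores([[0, 5, 10], [2, 0, 1], [8, 1, 0]]).round(3))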
Author or Article-Level Metrics
Once a researcher has published in a particular journal, chosen on the basis of various journal metrics and other information, the impact and spread of that research need to be reflected in the researcher's bio-sketch. This can be measured using the following.
- Traditional/classical metrics
Examples: H-Index, G-Index, PageRank Index, i10-Index.
- Alternative metrics
Examples: Altmetrics, Plum Analytics.
- Author profiles.
Examples: ORCID, ResearchGate, ResearcherID, Scopus author identifiers.
Traditional and alternative author metrics are algorithm-based scores that are given by professional and commercial bodies. On the other hand, author profiles are self-managed accounts more like a social media platform used to publicize, share, and discuss scholarly activities.[12]
Classical Author-Level Metrics
H-Index
First introduced by Jorge Hirsch in 2005, the H-index quantifies the cumulative impact of a researcher's publications.[13] The index not only gives an idea of the number of publications by an author but also of the number of citations the author has garnered. The h-score can be calculated manually by arranging the publications in descending order of the number of citations and identifying the largest rank at which the number of citations is equal to or greater than that rank. An alternative to manual calculation of the H-index is to use open-source options such as the Publish or Perish software.[14]
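The manual procedure described above translates directly into a few lines of Python; the citation counts here are hypothetical.

def h_index(citations):
    """Largest rank h such that the paper at rank h (citations sorted in
    descending order) has at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers cited 10, 5, 4, 2, 1 and 0 times give h = 3.
print(h_index([10, 5, 4, 2, 1, 0]))  # 3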
Drawbacks and modifications of H-index
In a scientific career, citations build up over the years. Hence, the H-index is not a good indicator of impact in the early stage of a researcher's career. Though a corrective measure in the form of the m-quotient (the H-index divided by the number of years since the first publication) was proposed by Hirsch himself to correct for career length, the quotient falls short of its purpose.[15] Citation practices differ across fields, and comparing the H-index across disciplines is not the right way to assess the worth of a researcher. The Namazi (n)-index is a modification of the H-index designed to overcome the bias in citation patterns across scientific fields. The H-index is also independent of author order, which is another thorny subject in the authorship debate. Finally, it is not a weighted score, so the impact of publishing in a prestigious journal is not evident.[16] Though several modifications of the H-index have been proposed, they are still not widely used owing to prevailing limitations.[15]
G-Index
Another problem encountered with the H-index is that the top publications of an author that continue to be cited do not increase the index further. To overcome this, the G-index was introduced in 2006.[17] This index considers both the number of well-cited publications and their overall citation performance over time.
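A hedged sketch of the usual g-index definition (the largest rank g for which the top g papers together have at least g² citations), applied to the same hypothetical citation list used above:

def g_index(citations):
    """Largest rank g such that the g most-cited papers have, in total,
    at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Cumulative citations 10, 15, 19, 21 exceed 1, 4, 9, 16, so g = 4 (while h = 3),
# reflecting how heavily cited top papers lift the g-index but not the h-index.
print(g_index([10, 5, 4, 2, 1, 0]))  # 4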
PageRank Index
This is a newer approach that overcomes the dependence on purely quantitative measures in the above-mentioned indices. The PageRank index gets its name from the PageRank algorithm of Google. Using this algorithm, weighted scores are assigned based on the PageRank score of the source citing the research. The system is able to distinguish researchers who are in the early days of a scientific career and can assign high scores even to those with limited but innovative publications.[18]
i10-Index
The i10-index simply reflects the number of articles with at least 10 citations. It is calculated using Google Scholar and suffers from shortcomings similar to those of the H-index.[19]
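For completeness, the i10-index is the simplest of the lot to compute; with the same hypothetical citation list as above:

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

print(i10_index([10, 5, 4, 2, 1, 0]))  # 1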
Source-normalised impact per paper
This is a citation-based score measuring the number of citations per paper over a 3-year window. The scores are weighted based on the total number of citations in the subject field, which allows SNIP scores to be compared directly across fields.[20]
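The sketch below only illustrates the normalization principle; the actual SNIP computed for Scopus derives the field's "citation potential" in a more involved way than the single assumed number used here.

def snip(raw_citations_per_paper, field_citation_potential):
    """Raw impact per paper divided by a field-specific citation potential,
    so heavily citing fields do not automatically score higher."""
    return raw_citations_per_paper / field_citation_potential

# Two hypothetical journals with identical raw impact, from fields with
# different citation habits, end up with different normalized scores.
print(snip(4.0, 2.0))  # 2.0 in a sparsely citing field
print(snip(4.0, 4.0))  # 1.0 in a heavily citing field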
Alternative Author/Article-Level Metrics
Altmetrics
Altmetric scores (AS) quantify the digital attention an article receives. The AS provides a weighted score based on the digital platforms on which an article is discussed, with inbuilt mechanisms to prevent self-citations and to detect repeated posts by the same person. The scoring tracks and screens social media, Wikipedia, public policy documents, blogs, and mainstream news, and thus indirectly reflects the reach of each source.[21] Altmetrics are represented by the familiar colourful donut [Figure 1]. Each colour in the donut represents a source, and the number in the centre represents the combined score from across the various platforms.[22]

Figure 1: The Altmetric donut. The donut also serves as a hyperlink that takes the reader to the original research (modified with permission: Courtesy - altmetric.com, image captured on 24.2.2020)
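The weights below are assumptions chosen only to illustrate the weighting principle; the real Altmetric score uses Altmetric.com's own source weights and de-duplication rules.

# Illustrative sketch of a weighted attention score; weights are assumptions.
ASSUMED_WEIGHTS = {"news": 8, "blog": 5, "policy_document": 3, "wikipedia": 3, "twitter": 1}

def weighted_attention_score(mentions):
    """mentions: mapping of source type -> number of unique mentions."""
    return sum(ASSUMED_WEIGHTS.get(source, 1) * count
               for source, count in mentions.items())

# A hypothetical article picked up by two news outlets, one blog and ten tweets.
print(weighted_attention_score({"news": 2, "blog": 1, "twitter": 10}))  # 31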
Plum Analytics
Plum Analytics was founded in 2012 as a hybrid method for measuring the research impact of authors and organizations and was acquired by Elsevier in 2017. Plum Analytics provides PlumX Metrics, which are comprehensive, article-level metrics. They provide insights into the ways people interact with individual pieces of research output (articles, conference proceedings, book chapters, and more). The PlumX metrics are denoted by the plum print [Figure 2].

Figure 2: Plum print: The size of each coloured circle in the plum print represents the relative amount of activity in the associated category (replicated with permission: courtesy Elsevier research metrics. Image captured on 7.4.20)
The plum print represents a composite of five categories of metrics: citations, usage, captures, mentions, and social media.
Since 2017, PlumX Metrics have been integrated with Scopus as the PlumX Metrics API, which can be used to find aggregate metric counts for individual document identifiers on Scopus,[23] and the various categories are depicted in [Figure 2].[24]
ImpactStory
ImpactStory[25] is an open-source tool that is free for everyone to use. It is funded by the National Science Foundation and the Alfred P. Sloan Foundation. ImpactStory draws data for analysis from various sources, including social media sites such as Facebook, Twitter, CiteULike, and Delicious; indexing and reference sources such as PubMed, Scopus, CrossRef, ScienceSeeker, Mendeley, and Wikipedia; and promotion sites such as SlideShare, Dryad, and Figshare.
Public Library of Science
The metric is available by default upon publishing in one of the Public Library of Science (PLOS) journals.[26] Like most other alternative metrics, the score assimilates data on online attention from HTML page views, downloads, PubMed Central usage, citations computed by third-party databases and search engines (including Google Scholar and Scopus), mentions in social media (Facebook and Twitter), blogs, and feedback within the comments section of the PLOS website.
Author Profiles
With the advent of social media, online profiles have become important components of a researcher's identity. Numerous profiling platforms enable researchers to showcase their work effectively, network with fellow researchers,[27] and ensure appropriate author-name disambiguation across publications.[28] Some of the widely used author profiles are Scopus author identifiers, ResearcherID, and the Open Researcher and Contributor Identification (ORCID).
Conclusion
The metrics for the evaluation of journals and researchers in their current form are far from perfect. However, by understanding the advantages and shortcomings of each of them, we will be in a better position to judge and interpret the scores. Until a better metric is devised, let us aim at doing good work and continue making scientific contributions.
References
1. Cooper ID. Bibliometrics basics. J Med Libr Assoc 2015;103:217-8.
2. Van Noorden R. Online collaboration: Scientists and the social network. Nature 2014;512:126-9.
3. Li D, Agha L. Research funding. Big names or big ideas: Do peer-review panels select the best science proposals? Science 2015;348:434-8.
4. Pendlebury DA. The use and misuse of journal metrics and other citation indicators. Arch Immunol Ther Exp (Warsz) 2009;57:1-11.
5. Archambault É, Larivière V. History of the journal impact factor: Contingencies and consequences. Scientometrics 2009;79:635-49.
6. Roldan-Valadez E, Salazar-Ruiz SY, Ibarra-Contreras R, Rios C. Current concepts on bibliometrics: A brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics. Ir J Med Sci 2019;188:939-51.
7. Bredan A, Benamer HT, Bakoush O. Why are journals from less-developed countries constrained to low impact factors? Libyan J Med 2014;9:25774.
8. Favaloro EJ. Measuring the quality of journals and journal articles: The impact factor tells but a portion of the story. Semin Thromb Hemost 2008;34:7-25.
9. Ogden TL, Bartley DL. The ups and downs of journal impact factors. Ann Occup Hyg 2008;52:73-82.
10.
11. Bergstrom CT, West JD, Wiseman MA. The Eigenfactor metrics. J Neurosci 2008;28:11433-4.
12. Gasparyan AY, Yessirkepov M, Duisenova A, Trukhachev VI, Kostyukova EI, Kitas GD. Researcher and author impact metrics: Variety, value, and context. J Korean Med Sci 2018;33:e139.
13. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005;102:16569-72.
14.
15. Post A, Li AY, Dai JB, Maniya AY, Haider S, Sobotka S, et al. C-Index and subindices of the H-index: New variants of the h-index to account for variations in author contribution. Cureus 2018;10:e2629.
16. Bornmann L, Daniel HD. The state of h index research. Is the h index the ideal way to measure research performance? EMBO Rep 2009;10:2-6.
17. Egghe L. Theory and practise of the g-index. Scientometrics 2006;69:131-52.
18. Senanayake U, Piraveenan M, Zomaya A. The Pagerank-index: Going beyond citation counts in quantifying scientific impact of researchers. PLoS One 2015;10:e0134794.
19. He C, Tan C, Lotfipour S. WestJEM's impact factor, h-index, and i10-index: Where we stand. West J Emerg Med 2014;15. Available from: https://escholarship.org/uc/item/8mg8w8p1. [Last accessed on 2020 Feb 03].
20. Oosthuizen JC, Fenton JE. Alternatives to the impact factor. Surgeon 2014;12:239-43.
21. Warren HR, Raison N, Dasgupta P. The rise of altmetrics. JAMA 2017;317:131-2.
22.
23. Champieux R. PlumX. J Med Libr Assoc 2015;103:63-4.
24.
25.
26.
27. Meishar-Tal H, Pieterse E. Why do academics use academic social networking sites? Int Rev Res Open Distrib Learn 2017;18:1-22.
28. da Silva JT. ORCID: The challenge ahead. Eur Sci Ed 2017;43:34.