The utilization of a paper-level classification system in the evaluation of journal impact
Zhesi Shen*, Sichao Tong, Fuyou Chen, Liying Yang
National Science Library, Chinese Academy of Sciences, Beijing 100190, P. R. China
Abstract
CAS Journal Ranking, a ranking system of journals based on the bibliometric indicator of citation impact, has been widely used in meso- and macro-scale research evaluation in China since its first release in 2004. The ranking covers the journals included in Clarivate's Journal Citation Reports (JCR). This paper mainly introduces the upgraded 2019 version of the CAS Journal Ranking. Addressing the limitations of the indicator and classification system used in earlier editions, as well as the problem of journals' interdisciplinarity or multidisciplinarity, we discuss the improvements in the 2019 upgraded version: (1) the CWTS paper-level classification system, a more fine-grained system, has been utilized; (2) a new indicator, the Field Normalized Citation Success Index (FNCSI), which is robust not only against extremely highly cited publications but also against wrongly assigned document types, has been used; and (3) the indicator is calculated at the paper level. In addition, this paper presents a small part of the ranking results and an interpretation of the robustness of the new FNCSI indicator. By exploring more sophisticated methods and indicators, such as the CWTS paper-level classification system and the new FNCSI indicator, the CAS Journal Ranking will continue to serve its original purpose of responsible research evaluation.
Keywords: journal ranking, field normalization, Citation Success Index

* Corresponding author
Email address: [email protected] (Zhesi Shen)
Preprint submitted to QSS, June 11, 2020

1. Introduction

The CAS Journal Ranking, released annually by the Center of Scientometrics (CoS), National Science Library of the Chinese Academy of Sciences (CAS), is a journal ranking widely used in China. It ranks journals contained in Clarivate's Journal Citation Reports (JCR) based on bibliometric data. We will sketch its history and mainly introduce the upgraded 2019 version of the CAS Journal Ranking, in which we first utilize the CWTS paper-level classification system and a new indicator, the Field Normalized Citation Success Index (FNCSI).

The Journal Impact Factor (JIF), which is not field normalized and has been widely used as a journal indicator, performs differently in different research domains. Around the year 2000, in practical administrative work, the CoS research group gradually identified that the impact factor was being misused in most cases in China at that time. Aiming to compare and analyze journals separately in different scientific domains, the CoS research group released the first edition of the CAS Journal Ranking in 2004, which became widely used in China. Journals can be grouped by subject area (major areas developed from the degree classification of the Degree Office of the Ministry of Education of the People's Republic of China) or by subject category (specific subject categories developed from the JCR journal subject categories in the Web of Science database).

The CAS Journal Ranking has been applied in many cases, from supporting the scientific policy-making of institutions to providing journal information to researchers. At the institutional level, institutions can assess the performance of their scientific output by examining its distribution across the CAS Journal Ranking, and this information can inform related policies. Among the cash-per-publication reward policies in China, the CAS Journal Ranking plays a dominant role.
Chinese universities usually reward researchers for scientific output, motivating scientific research. Quan et al. (2017) analyzed 168 reward policies in China and found an increasing trend of adopting the CAS Journal Ranking in Chinese universities from 2005, after the first edition was released; by 2016, 99 of the reward policies took the CAS Journal Ranking as their reference. For researchers, the CAS Journal Ranking can help them learn about journals in their target fields, from a relatively comprehensive view, when submitting their research output. Additionally, some journals use the CAS Journal Ranking as a source of information about themselves and other journals.

1.2. Limitations of the old CAS Journal Ranking
The first limitation of the old CAS Journal Ranking relates to the indicator. For a journal, the citation distribution is skewed, and the JIF can be vastly affected by the tail of highly cited papers. We previously used a three-year averaged JIF to alleviate such fluctuation; however, it is still not robust enough against occasionally highly cited papers.

The second limitation is that the journal classification system used in the old CAS Journal Ranking is not fine-grained. Regarding citation practices, Garfield (1979) proposed the citation potential, which can be defined as the probability of being cited and which differs significantly between fields. We previously used the JCR journal subject categories in the Web of Science database; however, these are still not fine-grained, and differences in citation behavior also exist within fields (e.g., citation behavior differs between areas within the medical field in the study by van Eck et al. (2013), which was based on the WoS subject categories used in the old CAS Journal Ranking).

We plot a science map of journals from all fields (see Figure 1), with each dot representing a journal and its color representing citation potential. The layout of this map was used in an earlier paper (Shen et al., 2019) and is based on the journal citation network. Here we use a journal's expected JIF as an indicator of citation potential; the detailed formula can be found in the Method and Data section. The color of each dot reflects the value of the corresponding journal's expected JIF: the redder/bluer the color, the larger/smaller the value. Figure 1 indicates a clear distinction in citation potential between research fields.
We can see that citation behavior differs between areas not only within the medical fields studied above but also within many other fields; for example, the upper and lower parts of the Math category clearly perform differently.

We then take journals from the JCR category Statistics & Probability as an example. In Figure 2, each dot represents a journal, and journals whose titles contain "probability" are colored blue. In general, most blue dots have a smaller expected JIF, indicating that a distinction in citation potential probably exists between journals on different topics within the Statistics & Probability category; e.g., probability-related journals have weaker citation potential.

A third limitation relates to journals' interdisciplinarity or multidisciplinarity. In addition to the multidisciplinary scope of a growing number of journals, research topics can span established disciplines (Leydesdorff, 2007), bringing both benefits and challenges, especially for journal impact evaluation.

Figure 1: Map of scientific journals with expected JIF.
Figure 2: Correlation of JIF and expected JIF for journals in the Statistics and Probability category.

Refinements in this release include the following:

• The CWTS paper-level classification system, a more fine-grained system, has been utilized to address the above problems related to the classification system and to journals' interdisciplinarity.

• Instead of the JIF, a new indicator, the Field Normalized Citation Success Index (FNCSI), has been used in the upgraded version. Compared with other citation impact indicators, e.g., the three-year average JIF utilized in earlier editions, it excels not merely in robustness against the occasional ultra-small number of extremely highly cited publications, but also in robustness against wrongly assigned document types.

• In addition, we calculate the indicator at the paper level instead of the journal level, within article- and review-type papers.

More detailed information about the above refinements is discussed later in this paper. The Method and Data section introduces the data coverage, the CWTS paper-level classification system, and the indicator utilized in the upgraded 2019 version of the CAS Journal Ranking. The Results section includes a small part of the CAS Journal Ranking results and an interpretation of the advantages of the FNCSI. We finally discuss how the CAS Journal Ranking should be used appropriately for responsible research evaluation; ongoing work and future plans are also discussed.

2. Method and Data

The CAS Journal Ranking includes the journals contained in Clarivate's Journal Citation Reports (JCR) (JCR2018, 2019). For journals' citation data, we use the Journal Impact Factor contributing items released by Clarivate's Journal Citation Reports. These contain, for each article and review published in years Y-1 and Y-2, the citations received in year Y that count towards the journal's impact factor.
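As a concrete illustration, the two-year impact-factor window described above can be sketched as follows. This is a minimal toy sketch, not Clarivate's actual computation; the record layout and citation numbers are hypothetical.

```python
# Sketch of the two-year Journal Impact Factor window described above.
# The record layout and citation numbers are illustrative only.

def impact_factor(items, year):
    """Citations received in `year` by articles/reviews published in the
    two preceding years, divided by the number of such items."""
    window = [it for it in items if it["pub_year"] in (year - 1, year - 2)]
    if not window:
        return 0.0
    return sum(it["citations_in_year"] for it in window) / len(window)

items = [
    {"pub_year": 2017, "citations_in_year": 4},
    {"pub_year": 2017, "citations_in_year": 0},
    {"pub_year": 2016, "citations_in_year": 8},
    {"pub_year": 2014, "citations_in_year": 30},  # outside the window, ignored
]
print(impact_factor(items, 2018))  # -> (4 + 0 + 8) / 3 = 4.0
```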
The data utilized in the CWTS paper-level classification were collected from Clarivate's Web of Science database, covering publications of the document types article and review published between 2000 and 2018; this classification system only includes publications from the SCI and SSCI databases. For the details of constructing the CWTS paper-level classification system, from measuring the relatedness of publications to clustering publications into groups, we refer to Waltman & van Eck (2012, 2013a). The classification system consists of three levels of granularity: macro, meso, and micro. Here we use the micro level, with about 4,000 clusters. It should be noted that the released CWTS paper-level classification data exclude publications from trade journals and several local journals, i.e., these journals cannot be evaluated directly. Since we try to include as many journals as possible, for these unclassified publications we retrieve their related records from the WoS and assign them to clusters based on the clusters of the retrieved related records, using the majority rule. In total, 99% of the publications reported in the JCR are included in the calculation, and 98% of journals have more than 90% of their publications included.
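The majority-rule assignment of unclassified publications can be sketched as below. This is a minimal illustration: `related_clusters` stands for the micro-level cluster ids of the related records retrieved from the WoS, and the ids shown are hypothetical.

```python
from collections import Counter

def assign_cluster(related_clusters):
    """Assign an unclassified publication to the micro-level cluster that is
    most common among its retrieved related records (majority rule)."""
    if not related_clusters:
        return None  # no related records found: the publication stays unclassified
    return Counter(related_clusters).most_common(1)[0][0]

# e.g. five related records, three of which sit in cluster 42
print(assign_cluster([17, 42, 42, 3, 42]))  # -> 42
```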
In the CAS Journal Ranking 2019, we follow the idea of the Citation Success Index (CSI) and extend it to a field-normalized version. The original CSI, proposed to compare the citation capacity of two journals (Stringer et al., 2008; Milojević et al., 2017; Shen et al., 2018), is defined as the probability that a randomly selected paper from one journal has more citations than a randomly selected paper from the other journal. Following the same idea, we propose the Field Normalized Citation Success Index (FNCSI). The FNCSI is defined as the probability that the citation count of a paper from journal A is larger than that of a random paper on the same topic and with the same document type from other journals; for details, please refer to the formulas below. For comparison, we also consider the Field Normalized Impact Factor (FNIF).

For journal A, the probability that the citation count of a paper from journal A is larger than that of a random paper on the same topic and with the same document type from other journals is defined as:

\[ S_A = P(c_a > c_o \mid a \in A,\ o \in O) = \sum_{t,d} P(A_{t,d})\, P(c_a > c_o \mid a \in A_{t,d},\ o \in O_{t,d}) \tag{1} \]

For a specific research topic t, the FNCSI is defined as:

\[ S_A^t = \frac{1}{N_{A_t}} \sum_d N_{A_{t,d}} \left[ \frac{\sum_{a \in A_{t,d},\, o \in O_{t,d}} \mathbb{1}(c_a > c_o) + 0.5 \sum_{a \in A_{t,d},\, o \in O_{t,d}} \mathbb{1}(c_a = c_o)}{N_{A_{t,d}}\, N_{O_{t,d}}} \right] \tag{2} \]

Journal A usually involves several research topics at the micro level of the classification system; the total FNCSI of journal A is then aggregated over its topics:

\[ S_A = \frac{1}{N_A} \sum_t N_{A_t}\, S_A^t \tag{3} \]

where t ∈ {topic_1, topic_2, ...}, d ∈ {article, review}, and A_{t,d} represents the publications of journal A clustered in topic t with document type d.
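Equations (1)–(3) can be sketched in code as follows. This is a minimal illustration with toy citation counts: each "cell" is one (topic, document-type) combination, and in the actual ranking the "other" set contains all papers of the same topic and document type from other journals.

```python
from itertools import product

def cell_csi(journal_cites, other_cites):
    """Inner bracket of Eq. (2): P(c_a > c_o) within one (topic,
    document-type) cell, with ties counted as 0.5."""
    score = sum(1.0 if a > o else 0.5 if a == o else 0.0
                for a, o in product(journal_cites, other_cites))
    return score / (len(journal_cites) * len(other_cites))

def fncsi(cells):
    """Eqs. (1)-(3): average the per-cell CSI, weighting each (topic,
    document-type) cell by the journal's number of papers in it.
    `cells` is a list of (journal_citations, other_citations) pairs."""
    n_total = sum(len(j) for j, _ in cells)
    return sum(len(j) * cell_csi(j, o) for j, o in cells) / n_total

# Toy journal with one article cell and one review cell
cells = [([3, 0], [1, 1]),   # articles: wins 2 of 4 pairwise comparisons
         ([5], [2, 8])]      # reviews:  wins 1 of 2 pairwise comparisons
print(fncsi(cells))  # -> 0.5
```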
The Field Normalized Impact Factor (FNIF) uses the same classification system as the FNCSI but applies the commonly used average-citation-based normalization approach, i.e., each citation count is normalized by the average citation count of papers in the same topic cluster and with the same document type. The FNIF of journal A is defined as:

\[ F_A = \frac{1}{N_A} \sum_{t,d} \sum_{a \in A_{t,d}} \frac{c_a}{\mu_{t,d}} \tag{4} \]

where μ_{t,d} is the average citation count of papers in topic t with document type d. By comparing the results of the FNCSI and the FNIF, we can see the advantages of the CSI.

2.3.3. Expected JIF

As mentioned earlier, for each journal we use the expected JIF as an indicator of citation potential:

\[ E_A = \frac{1}{N_A} \sum_t \mu_t\, N_{A_t} \tag{5} \]

where μ_t is the average citation count of papers in topic t.
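Equations (4) and (5) can be sketched similarly, again with hypothetical toy numbers; the μ values stand for the field averages of the corresponding cells.

```python
def fnif(cells):
    """Eq. (4): each paper's citation count divided by the mean citation
    count mu of its (topic, document-type) cell, averaged over the journal.
    `cells` is a list of (citation_counts, field_mean_mu) pairs."""
    n_total = sum(len(cites) for cites, _ in cells)
    return sum(c / mu for cites, mu in cells for c in cites) / n_total

def expected_jif(topic_counts, topic_means):
    """Eq. (5): field average citation rates weighted by the journal's
    publication mix over topics."""
    n_total = sum(topic_counts.values())
    return sum(topic_means[t] * n for t, n in topic_counts.items()) / n_total

print(fnif([([2, 0], 2.0), ([6], 3.0)]))        # (1 + 0 + 2) / 3 -> 1.0
print(expected_jif({"t1": 2, "t2": 1},
                   {"t1": 2.0, "t2": 5.0}))     # (2*2 + 5*1) / 3 -> 3.0
```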
3. Results
In this section, we present the results of the CAS Journal Ranking based on the FNCSI and comparisons with other indicators. Table 1 shows the top 20 journals ranked according to FNCSI; here we only list journals that mainly publish research articles. The top five journals are well acknowledged in the natural and life sciences. The remaining journals belong to different fields rather than concentrating on a single or narrow field. Looking at the publishers of these journals, the list is dominated by Nature-titled, Lancet-titled, and Cell-titled journals.

The corresponding rankings of these top 20 journals based on their FNIF values are also presented in Table 1. Among these journals, the rankings of Cancer Cell, Nature Neuroscience, Cell Metabolism, and Nature Immunology are boosted most by the FNCSI indicator: they all climb more than 20 positions. Only Lancet Oncology shows a slight drop in position under the FNCSI indicator. Overall, journals from medical-related categories mostly show a relatively big gap between the two indicators. In Appendix Table 4 we present the top 20 journals for both FNCSI and FNIF.

The correlations among these journal citation indicators are shown in Figure 3; FNCSI and FNIF are highly correlated (Spearman correlation: 0.98, p-value: 0.0). In the lower part of Figure 3, we highlight several journals that rank worse under FNCSI than under FNIF. These journals share a common property: each has one or several highly cited papers and a majority of poorly cited papers; e.g., Chinese Physics C has one paper cited more than 2000 times, but about 70% of its papers are uncited (JCR2018, 2019).

Earlier in this article, we discussed the differences in citation potential between journals on different topics within the Statistics & Probability category. In Table 2, we give the top 20 journals (mainly publishing research articles) ranked according to FNCSI in this category. To some extent, we

Table 1: Top 20 ranked journals according to FNCSI.
Journal | Category (WoS) | FNCSI | FNIF
LANCET | MEDICINE, GENERAL & INTERNAL | 1 | 3
NATURE | MULTIDISCIPLINARY SCIENCES | 2 | 5
JAMA | MEDICINE, GENERAL & INTERNAL | 3 | 4
SCIENCE | MULTIDISCIPLINARY SCIENCES | 4 | 9
CELL | BIOCHEMISTRY & MOLECULAR BIOLOGY / CELL BIOLOGY | 5 | 15
WORLD PSYCHIATRY | PSYCHIATRY | 6 | 8
LANCET NEUROL | CLINICAL NEUROLOGY | 7 | 11
NAT PHOTONICS | OPTICS / PHYSICS, APPLIED | 8 | 17
NAT GENET | GENETICS & HEREDITY | 9 | 13
NAT MED | BIOCHEMISTRY & MOLECULAR BIOLOGY / CELL BIOLOGY / MEDICINE, RESEARCH & EXPERIMENTAL | 10 | 21
NAT MATER | MATERIALS SCIENCE, MULTIDISCIPLINARY / CHEMISTRY, PHYSICAL / PHYSICS, APPLIED / PHYSICS, CONDENSED MATTER | 11 | 12
LANCET ONCOL | ONCOLOGY | 12 | 10
CANCER CELL | ONCOLOGY / CELL BIOLOGY | 13 | 38
NAT CHEM | CHEMISTRY, MULTIDISCIPLINARY | 14 | 31
NAT NEUROSCI | NEUROSCIENCES | 15 | 36
CELL METAB | CELL BIOLOGY / ENDOCRINOLOGY & METABOLISM | 16 | 51
LANCET RESP MED | CRITICAL CARE MEDICINE / RESPIRATORY SYSTEM | 17 | 22
NAT IMMUNOL | IMMUNOLOGY | 18 | 58
LANCET DIABETES ENDO | ENDOCRINOLOGY & METABOLISM | 19 | 27
NAT NANOTECHNOL | NANOSCIENCE & NANOTECHNOLOGY / MATERIALS SCIENCE, MULTIDISCIPLINARY | 20 | 23
Figure 3: Correlation of rankings based on FNCSI and FNIF. (Highlighted journals: ACTA CRYSTALLOGR B, CHINESE PHYS C, EPILEPSY CURR, J MATH SOCIOL, KIDNEY INT SUPPL, SOC SCI JPN J.)

can find that journals with weak citation potential are revealed by FNCSI, including several well-acknowledged journals such as Annals of Statistics, Annals of Probability, and Biometrika.

The robustness of an indicator describes its sensitivity to changes in the set of publications on which it is calculated. A robust indicator will not change much in response to an occasional ultra-small number of highly cited publications. To measure the robustness of an indicator, we construct several sets of publications for each journal with a bootstrapping method and recalculate the indicator and rankings accordingly. Specifically, for a journal with N publications, we randomly select N publications with replacement, calculate the indicators, and obtain a new ranking for each journal. We repeat this procedure 100 times and obtain 100 rankings for each journal. Figure 4(a) shows the distribution of the obtained rankings of Chinese Physics C. The range of rankings under FNCSI varies much less than under FNIF. The citation distribution of Chinese Physics C is highly skewed, with one paper cited about two thousand times and about 70% of papers not

Table 2: Top 20 ranked journals in the Statistics and Probability category according to FNCSI.
Journal | Rank (FNCSI) | Rank (Expected JIF) | Rank (JIF)
ECONOMETRICA | 1 | 69 | 2
J R STAT SOC B | 2 | 45 | 3
ANN STAT | 3 | 63 | 7
PROBAB THEORY REL | 4 | 86 | 10
ANN PROBAB | 5 | 103 | 15
FINANC STOCH | 6 | 99 | 22
J AM STAT ASSOC | 7 | 32 | 4
INT STAT REV | 8 | 39 | 16
J QUAL TECHNOL | 9 | 104 | 29
J STAT SOFTW | 10 | 20 | 1
ANN APPL PROBAB | 11 | 60 | 28
STOCH ENV RES RISK A | 12 | 7 | 8
BRIT J MATH STAT PSY | 13 | 9 | 20
TECHNOMETRICS | 14 | 56 | 21
BIOMETRIKA | 15 | 49 | 33
BAYESIAN ANAL | 16 | 44 | 35
BERNOULLI | 17 | 92 | 41
INSUR MATH ECON | 18 | 65 | 43
EXTREMES | 19 | 58 | 25
ECONOMET THEOR | 20 | 91 | 52
Figure 4: (a) Ranking variability of Chinese Physics C for FNCSI and FNIF. (b) Relative change of rankings based on FNCSI and FNIF.

cited. Thus, the FNIF depends strongly on whether this highly cited paper is included in the calculation or not.

To get an overview of the indicators' robustness, we calculate the relative change of rankings for these indicators. The relative change of ranking is defined as:

\[ \Delta = \frac{1}{N} \sum_j \frac{\max\{R_j\} - \min\{R_j\}}{\operatorname{avg}\{R_j\}} \tag{6} \]

where {R_j} is the set of rankings of journal j obtained from the above simulation. As shown in Fig. 4(b), the relative change of FNCSI is smaller than that of FNIF, implying that FNCSI is more robust: FNCSI mainly focuses on the central tendency of the citation distribution and is not easily affected by occasional highly cited papers.

Citation patterns are expected to vary considerably across document types (Price, 1965). When conducting the field normalization, we also consider the document type, so wrongly assigned document types will affect journals' indicators and rankings. To test the sensitivity of the indicators to wrongly labeled document types, we generate a virtual dataset:

• for each journal, we turn its most highly cited paper into the opposite document type, i.e., Article to Review or Review to Article,
and then we recalculate the journal indicators and obtain new rankings based on FNCSI and FNIF, respectively. The comparison of the rankings based on this changed data with the original rankings is shown in Fig. 5. Almost all the orange dots (FNCSI-based) lie closely along the diagonal line, while the blue squares (FNIF-based) spread much more broadly, implying that rankings based on FNCSI are more robust against wrongly labeled document types than rankings based on FNIF.

Figure 5: Robustness against document type for FNCSI and FNIF.
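Both robustness checks above rest on the same resampling loop. A minimal sketch, with a plain mean-citation indicator standing in for FNCSI/FNIF and entirely hypothetical journal data, is:

```python
import random

def bootstrap_rankings(journal_citations, indicator, n_runs=100, seed=0):
    """Resample each journal's citation list with replacement, recompute the
    indicator, and record the ranking obtained in each run (rank 1 = best)."""
    rng = random.Random(seed)
    rankings = {name: [] for name in journal_citations}
    for _ in range(n_runs):
        scores = {name: indicator([rng.choice(cites) for _ in cites])
                  for name, cites in journal_citations.items()}
        for rank, name in enumerate(sorted(scores, key=scores.get,
                                           reverse=True), start=1):
            rankings[name].append(rank)
    return rankings

def relative_change(rankings):
    """Eq. (6): average of (max - min) / mean over each journal's rankings."""
    spans = [(max(r) - min(r)) / (sum(r) / len(r)) for r in rankings.values()]
    return sum(spans) / len(spans)

# A journal with one extreme outlier ranks unstably under a mean-based indicator
journals = {"skewed": [0, 0, 0, 2000], "steady": [5, 5, 5, 5], "low": [3, 3, 3, 3]}
ranks = bootstrap_rankings(journals, lambda cs: sum(cs) / len(cs), n_runs=50)
print(relative_change(ranks))  # > 0: the rankings shift across resamples
```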
4. Conclusion and Discussion
In this paper, we briefly described the history of the CAS Journal Ranking and its practical applications by Chinese universities and institutes in rewarding, promotion, and research performance monitoring. We also discussed a number of limitations of earlier editions of the CAS Journal Ranking and our exploration of how to solve these problems. To address them, we introduced the new indicator used in the upgraded CAS Journal Ranking 2019, the Field Normalized Citation Success Index. The FNCSI extends the idea of the CSI and uses a fine-grained paper-level classification system to eliminate citation differences among fields. We also account for the difference in citation potential between articles and reviews in the normalization. A detailed comparison between FNCSI and FNIF indicates that the ranking results obtained from FNCSI are favorable and robust against both extremely highly cited publications and wrongly assigned document types.

We need to point out that, on the important issue of evaluating citation performance fairly across research fields, work has also been done from the source (citing) side, originating with Zitt & Small (2008), including the source normalized impact per paper (SNIP) indicator (Moed, 2010) and the revised SNIP indicator (Waltman et al., 2013). Comparisons and discussions between the source-side approach and the cited-side approach have been carried out by Waltman & van Eck (2013b), Waltman & Eck (2013), and Ruiz-Castillo (2014), and remain inconclusive; we refer to the overviews of these discussions provided by Waltman (2016) and Glänzel et al. (2019). We also plan an empirical comparison between these indicators. Besides, as previously mentioned regarding the limitations of earlier editions of the CAS Journal Ranking with respect to occasionally highly cited papers, the revised SNIP indicator has the same problem.
Lehmann & Wohlrabe (2017) give the example of the journal Advances in Physics, whose SNIP fluctuates significantly over time; they tried to address this problem by adopting the Elo rating system, which takes a journal's historical performance into consideration.

In addition, we have an ongoing exploration of providing journal profiles, which will offer more detailed information about the topics a journal covers and facilitate the comparison of journals on a target topic. This journal profile module will be added to the CAS Journal Ranking in future editions.

Around 1990, China started launching reward policies to encourage Chinese scholars to join the international research community and publish papers in international, mainly WoS-indexed, journals (Peng, 2011). By now, Chinese institutions all have their own reward policies (Quan et al., 2017), and these policies, which mostly reference the CAS Journal Ranking, have indeed succeeded in promoting China's international scientific publications over this period. The CAS Journal Ranking has truly promoted understanding of journals among Chinese policymakers and researchers. At the same time, however, we are aware that inappropriate use also has a negative impact, as indicators' functions can easily be warped in practical evaluation, even becoming a driving force of research (Campbell, 1979; Wouters et al., 2019). We especially note its misuse in evaluating individual research, as in the cash reward policies analyzed in an earlier study (Quan et al., 2017), most of which take the CAS Journal Ranking, or other bibliometric indicators, as a golden rule instead of as a reference or supporting measure. We call for any practice using journal indicators to meet the criteria proposed by Wouters et al. (2019):

• "Justified. Journal indicators should have only a minor and explicitly defined role in assessing the research done by individuals or institutions (McKiernan et al., 2019).

• Contextualized. In addition to numerical statistics, indicators should report statistical distributions (for example, of article citation counts), as has been done in the Journal Citation Reports since 2018 (Larivière, 2016). Differences across disciplines should be considered.

• Informed. Professional societies and relevant specialists should help to foster literacy and knowledge about indicators. For example, a PhD training course could include a role-playing game to demonstrate the use and abuse of journal indicators in career assessment.

• Responsible. All stakeholders need to be alert to how the use of indicators affects the behaviour of researchers and other stakeholders. Irresponsible uses should be called out."

Following these criteria, we, the CoS research group, will continue to pursue our original purpose of responsible research evaluation, exploring more sophisticated methods and indicators and constantly improving the science of the CAS Journal Ranking.
Acknowledgements
The authors thank Dr. Nees J van Eck and CWTS for providing the paper-levelclassification data, and thank Ms. M. Zhu for valuable discussion.
Author contribution
Conceptualization: SZ, YL
Data Curation: CF, SZ
Formal analysis: SZ, TS
Methodology: SZ, YL, TS
Writing, original draft: SZ, TS
Writing, review & editing: SZ, YL, SF, CF
Supervision: YL

References
Campbell, D. T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 67–90.

van Eck, N. J., Waltman, L., van Raan, A. F. J., Klautz, R. J. M., & Peul, W. C. (2013). Citation analysis may severely underestimate the impact of clinical research as compared to basic research. PLOS ONE.

Garfield, E. (1979). Citation Indexing: Its Theory and Application in Science, Technology, and Humanities.

Glänzel, W., Moed, H. F., Schmoch, U., & Thelwall, M. (2019). Springer Handbook of Science and Technology Indicators.

JCR2018 (2019). 2018 Journal Impact Factor, Journal Citation Reports (Clarivate Analytics, 2019).

Larivière, V. (2016). A simple proposal for the publication of journal citation distributions. bioRxiv, 62109.

Lehmann, R., & Wohlrabe, K. (2017). Who is the 'journal grand master'? A new ranking based on the Elo rating system. Journal of Informetrics, 800–809.

Leydesdorff, L. (2007). Betweenness centrality as an indicator of the interdisciplinarity of scientific journals. Journal of the Association for Information Science and Technology, 1303–1319.

McKiernan, E. C., Schimanski, L. A., Nieves, C. M., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife.

Milojević, S., Radicchi, F., & Bar-Ilan, J. (2017). Citation success index: an intuitive pair-wise journal comparison metric. Journal of Informetrics, 223–231.

Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 265–277.

Peng, C. (2011). Focus on quality, not just quantity. Nature, 267.

Quan, W., Chen, B., & Shu, F. (2017). Publish or impoverish: An investigation of the monetary reward system of science in China (1999–2016). Aslib Journal of Information Management, 486–502.

Ruiz-Castillo, J. (2014). The comparison of classification-system-based normalization procedures with source normalization alternatives in Waltman and van Eck (2013). Journal of Informetrics, 25–28.

Shen, Z., Chen, F., Yang, L., & Wu, J. (2019). Node2vec representation for clustering journals and as a possible measure of diversity. Journal of Data and Information Science, 79–92.

Shen, Z., Yang, L., & Wu, J. (2018). Lognormal distribution of citation counts is the reason for the relation between Impact Factors and Citation Success Index. Journal of Informetrics, 153–157. doi:10.1016/j.joi.2017.12.007.

Stringer, M. J., Sales-Pardo, M., & Amaral, L. A. N. (2008). Effectiveness of journal ranking schemes as a tool for locating information. PLoS ONE, e1683.

Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 365–391.

Waltman, L., & van Eck, N. J. (2012). A new methodology for constructing a publication-level classification system of science. Journal of the Association for Information Science and Technology, 2378–2392.

Waltman, L., & van Eck, N. J. (2013a). A smart local moving algorithm for large-scale modularity-based community detection. European Physical Journal B, 471.

Waltman, L., & Eck, N. J. (2013). Source normalized indicators of citation impact: an overview of different approaches and an empirical comparison. Scientometrics, 699–716.

Waltman, L., & van Eck, N. J. (2013b). A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 833–849.

Waltman, L., van Eck, N. J., van Leeuwen, T. N., & Visser, M. S. (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 272–285.

Wouters, P., Sugimoto, C. R., Larivière, V., McVeigh, M. E., Pulverer, B., de Rijcke, S., & Waltman, L. (2019). Rethinking impact factors: better ways to judge a journal. Nature, 621–623.

Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the Association for Information Science and Technology, 1856–1860.

Appendix
Appendix A. Top 20 ranked research journals
In Table 4, we list the top 20 ranked research journals based on FNCSI and FNIF, respectively. Compared with the journals selected according to FNCSI, the top four journals via FNIF are all medical-related.
Appendix B. Additional results on the robustness comparison between FNCSI and FNIF
In this section, we present some additional results on the robustness of the proposed journal indicators. Figure 4(b) illustrated the relative change of rankings based on FNCSI and FNIF; here we provide some further analysis. In Figure 6 we compare, for each journal, the 1st-quartile (x-axis) and 3rd-quartile (y-axis) rankings obtained from the 100 simulations. For both FNCSI and FNIF, the dots lie mainly along the diagonal line, implying that the rankings of most journals are stable. Comparing the orange dots (FNCSI) and blue squares (FNIF), the spread of the orange dots is smaller, indicating that rankings based on FNCSI are more stable than rankings based on FNIF when dealing with some special journals.

Journal indicators should also be stable across time, as a journal's reputation and quality do not change dramatically. In Figure 7 we present the evolution of rankings based on JIF, FNIF, and FNCSI for the journal J Math Sociol. The rankings based on JIF and FNIF show a big jump in 2018 compared with previous years, whereas the ranking based on FNCSI increases only slightly. Because of data availability, we only calculated the indicators for 2017 and 2018; we will continue to monitor this journal's performance in 2019 and forthcoming years.

Table 3: Top 20 journals based on FNCSI and FNIF respectively.
Journal | FNCSI | Journal | FNIF
LANCET | 1 | CA-CANCER J CLIN | 1
NATURE | 2 | NEW ENGL J MED | 2
JAMA-J AM MED ASSOC | 3 | LANCET | 3
SCIENCE | 4 | JAMA-J AM MED ASSOC | 4
CELL | 5 | NATURE | 5
WORLD PSYCHIATRY | 6 | PSYCHOL SCI PUBL INT | 6
LANCET NEUROL | 7 | Q J ECON | 7
NAT PHOTONICS | 8 | WORLD PSYCHIATRY | 8
NAT GENET | 9 | SCIENCE | 9
NAT MED | 10 | LANCET ONCOL | 10
NAT MATER | 11 | LANCET NEUROL | 11
LANCET ONCOL | 12 | NAT MATER | 12
CANCER CELL | 13 | NAT GENET | 13
NAT CHEM | 14 | PSYCHOL BULL | 14
NAT NEUROSCI | 15 | CELL | 15
CELL METAB | 16 | NAT ENERGY | 16
LANCET RESP MED | 17 | NAT PHOTONICS | 17
NAT IMMUNOL | 18 | CIRCULATION | 18
LANCET DIABETES ENDO | 19 | FUNGAL DIVERS | 19
NAT NANOTECHNOL | 20 | LANCET INFECT DIS | 20
Figure 6: Change of rankings based on FNCSI and FNIF.
Figure 7: Evolution of percentile rank for J Math Sociol based on different indicators. The percentile ranking is calculated within the Mathematics, Interdisciplinary Applications category.