A New Ranking Scheme for the Institutional Scientific Performance
S. Bilir a,∗, E. Göğüş b, Ö. Önal Taş c, and T. Yontan c

a Istanbul University, Faculty of Science, Department of Astronomy and Space Sciences, 34119, Istanbul, Turkey
b Sabancı University, Faculty of Engineering and Natural Sciences, 34956, Orhanlı-Tuzla, Istanbul, Turkey
c Istanbul University, Graduate School of Science and Engineering, Department of Astronomy and Space Sciences, 34116, Istanbul, Turkey
Abstract
We propose a new performance indicator to evaluate the productivity of research institutions through their disseminated scientific papers. The new quality measure includes two principal components: the normalized impact factor of the journal in which a paper was published, and the number of citations the paper has received per year since it was published. In both components, the scientific impacts are weighted by the contribution of authors from the evaluated institution. As a whole, our new metric, the institutional performance score, takes into account both journal-based and article-specific impact. We apply this new scheme to evaluate the research output performance of Turkish institutions specialized in astronomy and astrophysics in the period 1998-2012. We discuss the implications of the new metric and emphasize its benefits in comparison to other proposed institutional performance indicators.
Key words:
Astronomy & Astrophysics; Research performance; Bibliometrics; Statistical analysis
A natural extension of evaluating the research performance of individual scientists is to evaluate the research output productivity of research institutes.

∗ Corresponding author. Tel.: +90-212-440 00 00-10534
Email address: [email protected] (S. Bilir).

Preprint submitted to Journal of Scientometric Research, 10 September 2018

This is, however, a more challenging task than assessing the output record of an individual scientist, for various reasons. First of all, the number of affiliated scientists varies remarkably between institutes. This is easily handled by normalizing the research output with the number of affiliated researchers. Another important factor is the impact of the research output. On this front, the h-index (Hirsch, 2005) and some of its variants (Braun et al., 2006; Egghe, 2006; Van Raan, 2006; Jin et al., 2007; Guan & Gao, 2008; Vanclay, 2008; Schreiber et al., 2011) are usually employed.

Along with the wider use of advanced technology and methodologies in scientific research, the nature of research teams is also evolving. Unlike a few decades ago, scientific investigations performed by teams of about 10 or more scientists are not uncommon. The size of research teams can in some cases reach into the hundreds, as in the Large Hadron Collider collaboration at CERN (http://lhcb-public.web.cern.ch/lhcb-public/), which includes scientists affiliated with many different institutions. In the dissemination of these scientific efforts (most commonly in the form of research articles), the contribution of each team member (that is, co-author) is usually not reported explicitly. Therefore, assessing these large collaboration articles without taking author contributions into consideration would not be a fair evaluation of the respective institutions. To account for authorship credit, various schemes have been proposed, such as the harmonic author credit (Hagen, 2008) and the i-th author credit (Liu & Fang, 2012), both of which credit an author based on rank in the author list, or the fractional author credit (Liu & Fang, 2012), which credits all authors equally.

There have been numerous extensive studies on the scientific productivity evaluation of research institutions.
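The two families of author-credit schemes just mentioned can be illustrated in a few lines of code. This is a minimal sketch of harmonic credit (rank-weighted, normalized to sum to 1) and fractional credit (equal shares) for a five-author paper; the function names are ours, not notation from the cited papers.

```python
def harmonic_credit(rank, n_authors):
    """Harmonic author credit (Hagen, 2008): credit decreases with
    rank in the author list and sums to 1 over all co-authors."""
    norm = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / rank) / norm

def fractional_credit(n_authors):
    """Fractional author credit (Liu & Fang, 2012): all co-authors
    share the credit equally."""
    return 1.0 / n_authors

# Compare the two allocation schemes for a five-author paper.
n = 5
harmonic = [harmonic_credit(r, n) for r in range(1, n + 1)]
fractional = [fractional_credit(n)] * n
print([round(c, 3) for c in harmonic])  # first author receives the largest share
print(round(sum(harmonic), 3))          # both schemes sum to 1.0
```

Either scheme distributes exactly one unit of credit per paper; they differ only in how strongly they reward position in the author list.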
Vieira & Gomes (2010) investigated research impact for scientific institutions using an indicator that includes paper productivity as well as citation performance. Batista et al. (2006) proposed a measure interrelated with the h-index: they introduced h_I, the ratio of the square of the h-index of the institutional papers to the number of authors of these articles. Abramo et al. (2013) derived an indicator obtained by normalizing the institutional h-index by the number of full-time research personnel of the institute. Franceschini & Maisano (2010) suggested a structured technique to evaluate the scientific output of research groups, in which they employ the h-index as the key ingredient. Recently, Franceschini et al. (2013) proposed the success index for evaluating research institutions, which primarily takes into account institutional papers with greater citation records. Boell & Wilson (2010) proposed a ranking scheme based on the square of journal impact factors. Note that these performance indicators do not completely involve the effects of all the above-mentioned factors, in particular the author contribution to the impact of scientific output.

In order to obtain the complete dataset of astronomy and astrophysics research papers, we used Thomson Reuters Web of Knowledge (http://apps.webofknowledge.com), which includes 12 different databases of single and interdisciplinary citation indices. This database contains the list of all journals covered in the Science Citation Index (SCI) and provides the citation counts, without self-citations, for individual papers from 1980 to the present day. In the database, we identified 1702 publications in "Astronomy and Astrophysics" whose authors or co-authors were based in Turkey, published in 56 SCI journals in the period from 1980 to 2012.
After excluding papers in overlapping fields, such as physics, particles & fields; geosciences, multidisciplinary; meteorology & atmospheric sciences; engineering, aerospace; geochemistry & geophysics; mathematics, interdisciplinary applications; and remote sensing, the total number of publications was reduced to 1062. By document type, these 1062 papers divide into seven groups: articles (976), proceedings (37), letters (16), reviews (15), errata (10), research notes (7) and editorial notes (1). It was also found that 37 of these studies had been presented at meetings before they were published, and 10 of them were corrected and then re-published. For these reasons, we only considered the articles, letters, reviews, research notes, and editorial notes, which resulted in a sample size of 1015 publications. In Fig. 1, we present the distribution of these 1015 publications over time. Note that 782 of these papers had a leading author from a Turkish institution, while in 233 papers the leading authors were from international institutions.

Fig. 1. Distribution of research papers in astronomy and astrophysics published in SCI journals from 1980 to 2012. Green histograms represent the same distribution for the papers with leading authors from institutions in Turkey.

It can be seen from Fig. 1 that there is a prominent increase in the number of publications in SCI journals starting from about 2000. Bilir et al. (2013) suggested that this increase in scientific productivity was motivated by the application of improved academic assignment criteria in 1998, the common use of the Internet, and larger-scale research opportunities provided by the scientific research units of the universities. Therefore, in this study we analyzed the research papers published between 1998 and the end of 2012. We should note the important fact that the number of citations for each publication was determined as of 31 August 2013.
This way, even the latest publications had about one year of visibility, since papers published in SCI journals can take a couple of months before they are listed on Web of Knowledge.

Astronomy and astrophysics studies in Turkey are currently conducted in three types of departments: Astronomy and Space Sciences at Ankara, Ege, Erciyes and İstanbul universities; Astronomy and Space Technologies at Akdeniz and Çanakkale Onsekiz Mart (COMU) universities; and Physics at Boğaziçi (BOUN), Çukurova, İstanbul Kültür, Middle East Technical (METU) and Sabancı universities. Note that astrophysical research has also been conducted at various subdivisions of the Turkish Scientific and Technological Research Council (TÜBİTAK). We have identified 749 papers, published in SCI journals between 1998 and 2012, on which researchers based in Turkey were leading authors or co-authors. Researchers from 48 institutions contributed to these 749 publications. When we consider only the papers with leading authors based in Turkey, the number of papers published within the same time frame reduces to 564, contributed by researchers from 37 institutions inside Turkey.

Theory/calculation
As we outlined in the first section, there is currently no quality indicator for ranking the scientific productivity of research institutions that takes into account both scientific impact and author contribution. We introduce below our new, article-productivity-based ranking scheme, which we call the institutional performance score (IPS). It consists of two additive terms: (i) the impact factor (IF) of the journal for the year in which the article was published, multiplied by the contribution of co-authors from the evaluated institution (AC), and (ii) the ratio of the number of citations received (n_citations) to the number of years passed since the paper was published (n_years), also multiplied by AC:

$$ IPS = \frac{1}{N} \sum_{i=1}^{N} \left[ (IF)_i + \frac{n_{\mathrm{citations},i}}{n_{\mathrm{years},i}} \right] \times (AC)_i , \qquad (1) $$

where N is the total number of institutional articles published. In this scheme, the AC parameter is simply the ratio of the number of co-authors from a particular institute to the total number of co-authors. For example, if an article is published by five researchers, three of them from Institute A and two from Institute B, then the author contribution of this paper to Institute A is 3/5 and to Institute B is 2/5. The second term represents the scientific impact of the article itself, which diminishes over time if the article is not cited at a steady pace. Effectively, this indicator combines the author-contribution-weighted impact gained from the journal in which a particular article was published with that gained by the article itself.

We apply our proposed institutional performance indicator to the Turkish institutions performing research in astronomy and astrophysics and disseminating their outputs in the form of scientific articles. Note that the impact factor of the journal in the year a paper is published is one of the essential inputs to our new performance indicator. For this purpose, the impact factors of the nine most preferred SCI journals between 1998 and 2012 were compiled; they are presented in Table 1.
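Eq. (1) can be sketched in a few lines of code. The paper records below are hypothetical and serve only to reproduce the five-author example from the text, seen once from Institute A (three authors) and once from Institute B (two authors).

```python
from dataclasses import dataclass

@dataclass
class Paper:
    impact_factor: float  # IF of the journal in the publication year
    n_citations: int      # citations received to date
    n_years: float        # years elapsed since publication
    inst_authors: int     # co-authors from the evaluated institution
    total_authors: int    # total number of co-authors

def ips(papers):
    """Institutional performance score (Eq. 1): the mean, over all
    institutional papers, of (IF + citations per year), each paper
    weighted by its author contribution AC."""
    total = 0.0
    for p in papers:
        ac = p.inst_authors / p.total_authors
        total += (p.impact_factor + p.n_citations / p.n_years) * ac
    return total / len(papers)

# One paper, IF = 5.0, 20 citations over 4 years, five co-authors.
paper_a = Paper(5.0, 20, 4, inst_authors=3, total_authors=5)
paper_b = Paper(5.0, 20, 4, inst_authors=2, total_authors=5)
print(round(ips([paper_a]), 2))  # (5 + 20/4) * 3/5 = 6.0 for Institute A
print(round(ips([paper_b]), 2))  # (5 + 20/4) * 2/5 = 4.0 for Institute B
```

Because the sum is divided by N, adding many weakly cited papers in low-impact journals lowers the score, which is what makes the IPS a per-paper quality measure rather than a cumulative one.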
In the bottom row of Table 1, we provide the 15-year averages of the annual impact factors for each of these nine journals.

Table 1. Impact factors of SCI journals in astronomy and astrophysics between 1998 and 2012.

Years    MNRAS  A&A    ApJ    NewA   IJMPD  AN     Ap&SS  PASA   AJ
1998     3.960  1.630  1.953  2.912  0.732  0.518  0.234  0.419  2.003
1999     4.548  2.252  2.543  2.947  1.064  0.600  0.275  0.868  2.876
2000     4.685  2.790  2.822  2.241  1.051  0.410  1.189  1.028  3.604
2001     4.681  2.281  5.921  2.348  1.242  0.553  0.274  0.951  3.018
2002     4.671  3.781  6.187  3.108  1.507  0.786  0.383  0.898  5.119
2003     4.993  3.843  6.604  3.866  1.618  1.199  0.522  1.057  5.647
2004     5.238  3.694  6.237  2.171  1.500  0.906  0.597  1.158  5.841
2005     5.352  4.223  6.308  1.921  1.225  0.871  0.495  1.735  5.377
2006     5.057  3.971  6.119  2.220  1.651  1.399  0.771  1.588  4.854
2007     5.249  4.259  6.405  1.714  1.870  1.461  0.834  1.390  5.019
2008     5.185  4.153  6.331  1.784  1.741  1.261  1.283  2.564  4.769
2009     5.103  4.179  7.364  1.675  1.046  1.186  1.404  3.786  4.481
2010     4.888  4.425  6.063  1.632  1.109  0.842  1.437  1.590  4.555
2011     4.900  4.587  6.024  1.411  1.183  1.012  1.686  2.259  6.024
2012     5.521  5.084  6.733  1.850  1.030  1.399  2.064  3.120  4.965
Average  4.935  3.677  5.574  2.253  1.305  0.960  0.897  1.627  4.543
Note: (MNRAS) Monthly Notices of the Royal Astronomical Society; (A&A) Astronomy and Astrophysics; (ApJ) Astrophysical Journal; (NewA) New Astronomy; (IJMPD) International Journal of Modern Physics D; (AN) Astronomische Nachrichten; (Ap&SS) Astrophysics & Space Science; (PASA) Publications of the Astronomical Society of Australia; (AJ) Astronomical Journal.
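The bottom-row averages of Table 1 follow directly from the annual values; as a consistency check, here is the computation for the MNRAS column.

```python
# Annual MNRAS impact factors, 1998-2012, taken from Table 1.
mnras = [3.960, 4.548, 4.685, 4.681, 4.671, 4.993, 5.238, 5.352,
         5.057, 5.249, 5.185, 5.103, 4.888, 4.900, 5.521]

# 15-year average, as reported in the bottom row of Table 1.
average = sum(mnras) / len(mnras)
print(round(average, 3))  # 4.935, matching the table
```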
We present in Table 2 the resulting institutional performance scores (IPS) of the nine leading Turkish institutions, along with their numbers of publications, each component of the IPS, namely IF × AC (represented with ① in Table 2) and (n_citations/n_years) × AC (② in Table 2), as well as their institutional h- and other indices. The table is arranged such that its upper part covers all 749 publications, while its lower part covers the 564 publications whose leading authors reside in Turkey. The institutes in both portions of Table 2 are ranked according to their IPS values.

Table 2. The list of the nine leading institutions with their number of papers (N), total author-contribution-corrected journal impact (①), author-contribution-corrected individual impact (②), the ratio of ② to ①, and the IPS, for all publications (upper portion) and for the publications with leading authors based in Turkey (lower portion). We also list a few other performance indicators, namely the h-index, g-index, AR-index, and IF. [Table data not recoverable from the extraction; columns: Rank, Institution, N, ①, ②, ②/①, IPS, h-index, g-index, AR-index, IF.]

We find that Sabancı University appears at the top of the list in both publication categories, followed by Ege University, which produced the largest number of institutional publications in our sample. It is noteworthy that Ege University was founded in 1962 and Sabancı University in 1999, and that the average number of researchers at Ege has been much larger than at Sabancı. Our proposed performance indicator is not biased by such contrasts, since we normalize the total quantities by the number of papers published. With respect to the individual impact (② in Table 2), Sabancı again earns the first rank, followed by Ege and İstanbul.
Another important ranking tool here is the ratio of ② to ①, that is, the fraction of the collective impact of the scientific papers within the collective impact gained from their respective journals. In this scheme, Sabancı leads, closely followed by İstanbul University. Sabancı University also ranks at the top in the other performance indicators. It is striking to note in Table 2 that METU, İstanbul and COMU take the second, third and fourth places, respectively, in the h-index (Hirsch, 2005), g-index (Egghe, 2006), AR-index (Jin et al., 2007), and IF (Boell & Wilson, 2010) rankings, whereas in the IPS ranking METU, İstanbul and COMU rank sixth, fifth and fourth, respectively.

We present in Fig. 2 the average author contribution (AC) for each institution in both publication groups. We find that the average author contribution ratios from Turkish institutions to all publications vary between 0.34 and 0.70. For all 749 publications, the Turkey-resident author contribution to research papers mostly comes from BOUN, Ege and COMU (Fig. 2a). Four institutions were found to exceed an author contribution ratio of 0.50, while Sabancı University remains below this proportion, even though its research papers received the highest citation counts. The author contribution ratios for publications with domestic leading authors vary between 0.50 and 0.87. In this category, BOUN and Ege earn first place, followed by COMU with 0.84 and Ankara University with 0.74.

Fig. 2. Average author contributions, citations per paper and h-index of institutions for all papers (a) and for papers with domestic leading authors (b).

Also in Fig. 2, we present the number of citations per paper and the institutional h-indices of all nine Turkish institutions. When citations per paper are considered, Sabancı University leads in both publication categories, with 16.36 and 14.53 citations per paper, respectively.
It is followed by METU (11.74) and İstanbul (10.78) for all publications, and by İstanbul (9.37) and METU (7.76) for publications with domestic leading authors.

Finally, we construct the time evolution of the IPS values of the Turkish astronomy and astrophysics research institutions with more than 100 publications. We also calculate the h-index, g-index, AR-index, and IF for these five institutions, to compare with our proposed performance indicator.

Fig. 3. Evolution of the IPS, h-index, g-index, AR-index, and IF for the five institutions with more than 100 publications.

As seen in Fig. 3, the annual IPS of Sabancı University mostly lies in the 4-8 band over the course of our study period from 1998 to 2012. Note that Sabancı is a newly established institution, and its astrophysical research started in 1999. Ege, METU and İstanbul universities lie around IPSs of 4. It is noteworthy that Ege exhibits a gradually increasing trend until 2004. The IPS trends of the other institutions lie between 2 and 4. It is important to note that the h-index, g-index, AR-index, and IF evolve cumulatively in time, while the IPS can evolve positively or negatively, depending on the scientific impact of the research units.
Discussion
We introduced a new quantitative indicator to evaluate the scientific performance of research institutions. Our proposed indicator consists of two crucial components: one is the author-contribution-weighted impact factor of the journal in which a paper was published, and the other is the, again author-contribution-corrected, number of citations received by the paper per year since its appearance in the journal. In other words, the IPS value can be regarded as the institutional scientific impact of a research unit.

In the era of very high-speed communications and rather easy access to high-performance computation, today's scientists benefit greatly from the fact that geopolitical borders are no longer boundaries for scientific collaborations. As reaching out for international collaborations gets easier, international research teams eventually become larger. When it comes to extensively large experimental efforts, such as the Large Hadron Collider project at CERN, collaborations can comprise thousands of researchers from hundreds of different institutions. It would therefore not be appropriate to attribute the outcome of their collaborative effort (peer-reviewed papers) to one particular institution alone. For this reason, we include the ratio of the number of co-authors from a particular institution to the total number of co-authors as a multiplicative weight for the impact factor of the journal in which a particular paper was published.

The journal, and its associated impact factor, cannot provide a direct measure of the quality of a research topic. Some articles might end up in a journal with no page charge but a low impact factor, due to a lack of funding for publication costs. Fortunately, there are numerous journals that require no publication charges yet have high impact factors, such as MNRAS in the field of astronomy and astrophysics. When folded with the ratio of contributing authors, the impact factor becomes a more sensitive quality indicator of a research paper.

Another important achievement indicator of a scientific paper is the number of citations received. A paper unavoidably takes some time to gain visibility before it is referred to by peer researchers; as years pass, it becomes eligible for further referral. In our parameterization, we consider the citation-based impact of a paper per number of years passed, so that the outcome is balanced between newly published papers and those published a while ago that have already had ample time for visibility.

We apply our new performance indicator scheme to the outputs of Turkish institutions specialized in astronomy and astrophysics. We clearly find that the commonly used h-index and its variants suggest slightly different rankings for the same sample, since they primarily involve the citations received by papers. This approach underestimates the performance of an institution that produced a modest number of highly cited papers. As we showed in Table 2, the h-index, g-index, AR-index, and IF based rankings closely resemble each other; the IPS ranking, on the other hand, is significantly different. Another important property of the IPS is that it can grow or decay, depending on the scientific performance of research institutes, whereas the other four indicators compared here evolve cumulatively in time.

We also investigated other proposed performance indicators (Batista et al., 2006; Vieira & Gomes, 2010; Abramo et al., 2013; Franceschini et al., 2013). The indicator proposed by Abramo et al. (2013) requires the number of full-time-equivalent staff of research institutions, which is, in most cases, not easy to obtain for institutions other than the home institution of a researcher. The methods proposed by Batista et al. (2006) and Vieira & Gomes (2010) differ from a pure h-index analysis, but are still heavily based on h-index parameters. Our proposed scheme, on the other hand, makes use of easily available input parameters, which can be extracted from commonly used channels such as Web of Knowledge.

Finally, it is important to note that our proposed performance indicator can also be adapted to evaluate the scientific output of an individual researcher. For this purpose, the weighting parameter of the journal impact factor (i.e. the author contribution ratio) is simply replaced with the reciprocal of the number of co-authors. Summed over all publications of a researcher, this would provide a more sensitive comparison tool for personal evaluations.

References
Abramo, G., D'Angelo, C. A., & Viel, F. (2013). The suitability of h and g indexes for measuring the research performance of institutions. Scientometrics, 97, 555.
Batista, P. D., Campiteli, M. G., Kinouchi, O., & Martinez, A. S. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68, 179.
Bilir, S., Göğüş, E., Önal, Ö., Öztürkmen, N. D., & Yontan, T. (2013). Research performance of Turkish astronomers in the period of 1980-2010. Scientometrics, 97, 477.
Boell, S. K., & Wilson, C. S. (2010). Journal impact factors for evaluating scientific performance: use of h-like indicators. Scientometrics, 82, 613.
Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69, 169.
Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69, 131.
Franceschini, F., & Maisano, D. (2010). Analysis of the Hirsch index's operational properties. European Journal of Operational Research, 203, 494.
Franceschini, F., Maisano, D., & Mastrogiacomo, L. (2013). Evaluating research institutions: the potential of the success-index. Scientometrics, 96, 85.
Guan, J., & Gao, X. (2008). Comparison and evaluation of Chinese research performance in the field of bioinformatics. Scientometrics, 75, 357.
Hagen, N. T. (2008). Harmonic allocation of authorship credit: Source-level correction of bibliometric bias assures accurate publication and citation analysis. PLoS ONE, 3(12), e4021.
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569.
Jin, B., Liang, L., Rousseau, R., & Egghe, L. (2007). The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin, 52, 855.
Liu, X. Z., & Fang, H. (2012). Fairly sharing the credit of multi-authored papers and its application in the modification of h-index and g-index. Scientometrics, 91, 37.
Schreiber, M., Malesios, C. C., & Psarakis, S. (2011). Categorizing h-index variants. Research Evaluation, 21, 397.
Vanclay, J. (2008). Ranking forestry journals using the h-index. Journal of Informetrics, 2, 326.
Van Raan, A. F. J. (2006). Comparison of the Hirsch h-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics, 67, 491.
Vieira, E. S., & Gomes, J. A. N. F. (2010). A research impact indicator for institutions.