Tindaro Cicero
University of Rome Tor Vergata
Publications
Featured research published by Tindaro Cicero.
Journal of Informetrics | 2012
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
Over the past decade, national research evaluation exercises, traditionally conducted using the peer review method, have begun opening to bibliometric indicators. The citations received by a publication are assumed as a proxy for its quality, but because citation behavior varies across research fields, citation data must be standardized before use in the comparative evaluation of organizations or individual scientists. The objective of this paper is to compare the effectiveness of different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis is conducted for all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index Expanded®. Comparisons show that the citation average is the most effective scaling parameter, when the average is based only on the publications actually cited.
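The normalization the paper compares can be sketched quite simply: each publication's citation count is divided by a field-specific scaling parameter, for instance the mean citations of cited-only publications in the same subject category. Below is a minimal illustration in Python, assuming a toy table with hypothetical `field` and `citations` columns; it is not the authors' actual pipeline.

```python
import pandas as pd

# Toy data: publications with their subject category and citation counts
# (the 'field' and 'citations' columns are illustrative assumptions).
pubs = pd.DataFrame({
    "field": ["Physics", "Physics", "Physics", "Biology", "Biology"],
    "citations": [0, 4, 8, 25, 0],
})

# Scaling parameter per field: mean citations computed over cited-only
# publications (citations > 0), the option the paper finds most effective.
cited = pubs[pubs["citations"] > 0]
field_mean_cited = cited.groupby("field")["citations"].mean()

# Field-normalized citation score: raw citations / field scaling parameter.
pubs["normalized"] = pubs["citations"] / pubs["field"].map(field_mean_cited)

print(pubs)
```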
Journal of Informetrics | 2011
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
As more time passes from the original date of publication, measuring the impact of scientific works through subsequent citation counts becomes more accurate. However, the measurement of individual and organizational research productivity should ideally refer to a period whose closing date falls just prior to the evaluation exercise, so a compromise between accuracy and timeliness is necessary. This work attempts to provide an order of magnitude for the measurement error that occurs as the time lapse between date of publication and citation count decreases. The analysis is conducted by scientific discipline on the basis of publications indexed in the Thomson Reuters Italian National Citation Report.
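One simple way to gauge the loss of accuracy from a shorter citation window is to compute a rank correlation between early citation counts and the counts accumulated over a longer period. The sketch below uses synthetic data and SciPy's Spearman correlation purely as an illustration of the idea, not as the measurement procedure used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic "final" citation counts after a long citation window (illustrative only).
final_counts = rng.negative_binomial(n=2, p=0.2, size=1000)

# Early counts: a noisy, partial accumulation of the final counts.
early_counts = rng.binomial(final_counts, 0.3)

# Rank correlation between early and final counts approximates how well a
# short citation window preserves the impact ranking of publications.
rho, _ = spearmanr(early_counts, final_counts)
print(f"Spearman correlation (early vs. final window): {rho:.3f}")
```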
Journal of Informetrics | 2011
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
The current work proposes an application of DEA methodology for the measurement of technical and allocative efficiency of university research activity. The analysis is based on bibliometric data from the Italian university system for the five-year period 2004–2008. Technical and allocative efficiency is measured taking as input a university's research staff, classified by academic rank, and as output the field-standardized impact of the research produced by that staff. The analysis is applied to all scientific disciplines of the so-called hard sciences and conducted at the subfield level, thus at a greater level of detail than previously achieved in national-scale research assessments.
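As a rough illustration of the DEA machinery mentioned here, the sketch below computes input-oriented, constant-returns-to-scale (CCR) technical efficiency scores with SciPy's linear programming solver, taking staff counts as inputs and a single impact score as output. The data and model choice are assumptions for illustration; the paper's specification, which also covers allocative efficiency, is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 universities (DMUs), 2 inputs (e.g. staff by rank), 1 output
# (e.g. field-standardized impact). Purely illustrative numbers.
X = np.array([[10.0, 5.0], [8.0, 7.0], [12.0, 4.0], [6.0, 6.0]])  # inputs, shape (n, m)
Y = np.array([[60.0], [55.0], [50.0], [48.0]])                    # outputs, shape (n, s)
n, m = X.shape
s = Y.shape[1]

def ccr_input_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: minimize theta subject to
    sum_j lambda_j * x_j <= theta * x_o and sum_j lambda_j * y_j >= y_o."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta

    # Input constraints: sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)

    # Output constraints: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]

    res = linprog(
        c,
        A_ub=np.vstack([A_in, A_out]),
        b_ub=np.concatenate([b_in, b_out]),
        bounds=[(0, None)] * (n + 1),
        method="highs",
    )
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: technical efficiency = {ccr_input_efficiency(o):.3f}")
```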
Journal of Informetrics | 2012
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
Higher education systems in competitive environments generally feature top universities that are able to attract top scientists, top students and public and private financing, with notable socio-economic benefits for their region. The same does not hold true for non-competitive systems. In this study we measure the dispersion of research performance within and between universities in the Italian university system, which is typically non-competitive. We also investigate the correlation between universities' research performance and its dispersion. The findings may represent a first benchmark for similar studies in other nations. Furthermore, they lead to policy indications, questioning the effectiveness of selective funding of universities based on national research assessment exercises. The field of observation comprises all Italian universities active in the hard sciences. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008.
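The within/between dispersion in question can be summarized with a simple variance decomposition of individual performance scores. The snippet below, on synthetic data with assumed column names, only shows the kind of computation involved.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic individual performance scores for researchers at three universities
# (the column names are illustrative assumptions).
df = pd.DataFrame({
    "university": rng.choice(["A", "B", "C"], size=300),
    "performance": rng.gamma(shape=2.0, scale=1.0, size=300),
})

grand_mean = df["performance"].mean()
group = df.groupby("university")["performance"]

# Between-university variance: weighted squared deviation of university means.
between = (group.count() * (group.mean() - grand_mean) ** 2).sum() / len(df)
# Within-university variance: weighted average of within-group variances.
within = (group.count() * group.var(ddof=0)).sum() / len(df)

print(f"between-university share: {between / (between + within):.2%}")
print(f"within-university share:  {within / (between + within):.2%}")
```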
Higher Education | 2012
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
The potential occurrence of variable returns to size in research activity is a factor to be considered in choices about the size of research organizations and also in the planning of national research assessment exercises, so as to avoid favoring those organizations that would benefit from such occurrence. The aim of the current work is to improve on weaknesses in past inquiries concerning returns to size through application of a research productivity measurement methodology that is more accurate and robust. The method involves field-standardized measurements that are free of the typical distortions of aggregate measurement by discipline or organization. The analysis is conducted for 183 hard science fields in all 77 Italian universities (time period 2004–2008) and allows detection of potential differences by field.
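Returns to size are often probed by relating output to organization size, for example through a log-log regression whose slope signals increasing, constant, or decreasing returns. The snippet below is a generic sketch on synthetic data, not the field-standardized methodology applied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: research staff size and research output per university.
size = rng.integers(20, 500, size=77).astype(float)
output = size ** 0.95 * rng.lognormal(mean=0.0, sigma=0.2, size=77)  # mild decreasing returns

# Log-log regression: slope ~ 1 suggests constant returns to size,
# > 1 increasing returns, < 1 decreasing returns.
slope, intercept = np.polyfit(np.log(size), np.log(output), deg=1)
print(f"estimated elasticity of output with respect to size: {slope:.2f}")
```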
Journal of Informetrics | 2014
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
Ever more frequently, governments have decided to implement policy measures intended to foster and reward excellence in scientific research. This is in fact the intended purpose of national research assessment exercises. These are typically based on the analysis of the quality of the best research products; however, a different approach to analysis and intervention is based on measuring the productivity of individual scientists, meaning the overall impact of their entire scientific production over the period under observation. This work analyzes the convergence of the two approaches, asking whether and to what extent the most productive scientists achieve highly cited articles, and vice versa, what share of highly cited articles is achieved by scientists who are “non-top” in productivity. To do this we use bibliometric indicators, applied to the 2004–2008 publications authored by academics of Italian universities and indexed in the Web of Science.
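The convergence question can be made concrete as a cross-tabulation: what fraction of highly cited articles (say, the top 10% by citations) has at least one author among the top 10% most productive scientists? The sketch below performs this computation on synthetic data with assumed column names; it does not reproduce the paper's indicators.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic data: one row per (article, author) pair, plus a citation score per
# article and a productivity score per author (all names and values assumed).
n_articles, n_authors = 500, 200
df = pd.DataFrame({
    "article": rng.integers(0, n_articles, size=1500),
    "author": rng.integers(0, n_authors, size=1500),
})
article_citations = pd.Series(rng.pareto(2.0, size=n_articles), name="citations")
author_productivity = pd.Series(rng.gamma(2.0, 1.0, size=n_authors), name="productivity")

# Identify top-10% articles by citations and top-10% authors by productivity.
top_articles = set(article_citations[article_citations >= article_citations.quantile(0.9)].index)
top_authors = set(author_productivity[author_productivity >= author_productivity.quantile(0.9)].index)

# Share of highly cited articles with at least one "top productivity" author.
df["top_author"] = df["author"].isin(top_authors)
has_top_author = df.groupby("article")["top_author"].any()
highly_cited = has_top_author.index.isin(top_articles)
share = has_top_author[highly_cited].mean()
print(f"share of highly cited articles with a top-productivity author: {share:.2%}")
```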
Journal of Informetrics | 2013
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
Unlike competitive higher education systems, non-competitive systems show relatively uniform distributions of top researchers and low performers among universities. In this study, we examine the impact of unproductive and top faculty members on the overall research performance of the university they belong to. Furthermore, we analyze the potential relationship between the research productivity of a university and indexes of concentration of unproductive and top researchers. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008. The set analyzed consists of all Italian universities active in the hard sciences.
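A minimal version of this kind of analysis computes, for each university, the share of unproductive researchers (zero output) and of top researchers (e.g. the national top 10%), and correlates those shares with the university's average productivity. The code below is an illustrative sketch with assumed column names, not the concentration indexes used in the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# Synthetic researcher-level productivity scores by university (assumed schema).
df = pd.DataFrame({
    "university": rng.choice([f"U{i}" for i in range(30)], size=3000),
    "productivity": np.where(rng.random(3000) < 0.15, 0.0, rng.gamma(2.0, 1.0, 3000)),
})
top_threshold = df["productivity"].quantile(0.90)

by_uni = df.groupby("university")["productivity"]
summary = pd.DataFrame({
    "mean_productivity": by_uni.mean(),
    "share_unproductive": by_uni.apply(lambda s: (s == 0).mean()),
    "share_top": by_uni.apply(lambda s: (s >= top_threshold).mean()),
})

r_top, _ = pearsonr(summary["share_top"], summary["mean_productivity"])
r_unp, _ = pearsonr(summary["share_unproductive"], summary["mean_productivity"])
print(f"correlation with share of top researchers:          {r_top:+.2f}")
print(f"correlation with share of unproductive researchers: {r_unp:+.2f}")
```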
Scientometrics | 2013
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
There has been ample demonstration that bibliometrics is superior to peer review for national research assessment exercises in the hard sciences. In this paper we examine the Italian case, taking the 2001–2003 university performance ranking list based on bibliometrics as the benchmark. We compare the accuracy of the first national evaluation exercise, conducted entirely by peer review, with that of other ranking lists prepared at zero cost, based on indicators indirectly linked to performance or available on the Internet. The results show that, for the hard sciences, the costs of conducting the Italian evaluation of research institutions could have been completely avoided.
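Agreement between a candidate ranking list and the bibliometric benchmark can be summarized with rank correlation coefficients. The snippet below compares two made-up rankings with SciPy; it does not reproduce the paper's data or its accuracy measure.

```python
from scipy.stats import kendalltau, spearmanr

# Hypothetical ranks of the same ten universities under two methods
# (1 = best); purely illustrative numbers.
benchmark_rank   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
alternative_rank = [2, 1, 3, 5, 4, 7, 6, 10, 8, 9]

rho, _ = spearmanr(benchmark_rank, alternative_rank)
tau, _ = kendalltau(benchmark_rank, alternative_rank)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```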
Journal of Informetrics | 2013
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
The evaluation of performance at the individual level is of fundamental importance in informing management decisions. The literature provides various indicators and types of measures; however, a problem that remains unresolved and little addressed is how to compare the performance of researchers working in different fields (apples to oranges). In this work we propose a solution, testing various scaling factors for the distributions of research productivity in 174 scientific fields. The analysis is based on the observation of scientific production by all Italian university researchers active in the hard sciences over the period 2004–2008, as indexed in the Web of Science. The most effective scaling factor is the average of the productivity distribution of researchers with productivity above zero.
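In practice, scaling amounts to dividing each researcher's productivity by a field-level parameter, here the mean productivity of researchers with nonzero output, so that the rescaled scores can be pooled into a single cross-field ranking. The sketch below uses synthetic data and assumed column names, not the paper's dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Synthetic researcher productivity by field (column names are assumptions).
df = pd.DataFrame({
    "field": rng.choice(["Math", "Chemistry", "Medicine"], size=600),
    "productivity": np.where(rng.random(600) < 0.2, 0.0, rng.gamma(2.0, 1.0, 600)),
})

# Field scaling factor: average productivity of researchers with output > 0.
productive = df[df["productivity"] > 0]
scale = productive.groupby("field")["productivity"].mean()

# Rescaled, field-comparable productivity, allowing a single pooled ranking.
df["scaled"] = df["productivity"] / df["field"].map(scale)
df["national_rank"] = df["scaled"].rank(ascending=False, method="min")
print(df.sort_values("national_rank").head())
```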
Journal of Informetrics | 2015
Giovanni Abramo; Tindaro Cicero; Ciriaco Andrea D’Angelo
The literature on gender differences in research performance seems to suggest a gap between men and women, with the former outperforming the latter. Regardless of which factors one accepts as explaining the phenomenon, it is worthwhile to verify whether comparing performance within each gender, rather than without distinction, gives significantly different ranking lists. If some structural factor penalized the performance of female researchers compared to their male peers, then under conditions of equal capacities of men and women, any comparative evaluation of individual performance that fails to account for gender differences would distort judgments in favor of men. In this work we measure the extent of the differences in rank between the two methods of comparing performance in each field of the hard sciences: for professors in the Italian university system, we compare the distributions of research performance for men and women and subsequently the ranking lists with and without distinction by gender. The results are of interest for efficient selection in the formulation of recruitment, career advancement and incentive schemes.
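The comparison described here can be implemented by ranking researchers within their field twice, once against all peers and once against peers of the same gender, and then measuring how individual ranks shift. The snippet below is an illustrative sketch on synthetic data with assumed column names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Synthetic researcher-level data (field, gender, productivity) — assumed schema.
df = pd.DataFrame({
    "field": rng.choice(["Physics", "Biology"], size=400),
    "gender": rng.choice(["F", "M"], size=400),
    "productivity": rng.gamma(2.0, 1.0, size=400),
})

# Percentile rank within field, computed without and with distinction by gender.
df["rank_all"] = df.groupby("field")["productivity"].rank(pct=True)
df["rank_by_gender"] = df.groupby(["field", "gender"])["productivity"].rank(pct=True)

# Average shift in percentile rank, by gender, when moving to gender-specific lists.
df["shift"] = df["rank_by_gender"] - df["rank_all"]
print(df.groupby("gender")["shift"].mean())
```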