Paul Wouters
Leiden University
Publications
Featured research published by Paul Wouters.
Nature | 2015
Diana Hicks; Paul Wouters; Ludo Waltman; Sarah de Rijcke; Ismael Rafols
Use these ten principles to evaluate research, urge Diana Hicks, Paul Wouters, and colleagues.
Journal of the Association for Information Science and Technology | 2012
Ludo Waltman; Clara Calero-Medina; Joost Kosten; Ed C. M. Noyons; Robert J. W. Tijssen; Nees Jan van Eck; Thed N. van Leeuwen; Anthony F. J. van Raan; Martijn S. Visser; Paul Wouters
The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
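To illustrate innovation (2): fractional counting divides credit for a collaborative publication equally among the contributing universities, whereas full counting awards each of them a whole point. A minimal sketch in Python; the function and data are hypothetical and not the Leiden Ranking implementation:

```python
from collections import defaultdict

def count_publications(publications, fractional=True):
    """Count publications per university, fully or fractionally.

    publications: list of lists, each inner list naming the universities
    on one publication's byline (assumed unique within a publication).
    """
    scores = defaultdict(float)
    for universities in publications:
        # Fractional counting splits one publication equally over its
        # collaborating universities; full counting gives each a whole point.
        weight = 1.0 / len(universities) if fractional else 1.0
        for u in universities:
            scores[u] += weight
    return dict(scores)

pubs = [["Leiden", "Oxford"], ["Leiden"], ["Leiden", "Oxford", "MIT"]]
print(count_publications(pubs, fractional=True))
# {'Leiden': 1.83..., 'Oxford': 0.83..., 'MIT': 0.33...}
print(count_publications(pubs, fractional=False))
# {'Leiden': 3.0, 'Oxford': 2.0, 'MIT': 1.0}
```

Fractional counting keeps the total credit per publication equal to one, which makes aggregate output comparable across universities with very different collaboration patterns.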
Journal of the Association for Information Science and Technology | 2015
Rodrigo Costas; Zohreh Zahedi; Paul Wouters
An extensive analysis of the presence of different altmetric indicators provided by Altmetric.com across scientific fields is presented, focusing in particular on their relationship with citations. Our results confirm that social media altmetric counts are still sparse among scientific publications: only 15%–24% of publications show any altmetric activity, concentrated among the most recent publications, although their presence is increasing over time. Publications from the social sciences, humanities, and the medical and life sciences show the highest presence of altmetrics, indicating the potential value and interest of altmetrics for these fields. The analysis of the relationships between altmetrics and citations confirms previous claims of positive but relatively weak correlations, supporting the idea that altmetrics do not reflect the same kind of impact as citations. Moreover, altmetric counts do not always filter highly cited publications better than journal citation scores do: altmetric scores (particularly mentions in blogs) identify highly cited publications with higher precision than journal citation scores (JCS), but with lower recall. The value of altmetrics as a complementary tool to citation analysis is highlighted, although more research is needed to disentangle the potential meaning and value of altmetric indicators for research evaluation.
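The precision/recall comparison can be read as a standard set-overlap computation. A toy sketch with invented publication IDs and thresholds (not the paper's data) of how a filter such as blog mentions or a high journal citation score might be scored against the set of highly cited publications:

```python
def precision_recall(flagged, highly_cited):
    """flagged and highly_cited are sets of publication ids."""
    tp = len(flagged & highly_cited)  # true positives
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(highly_cited) if highly_cited else 0.0
    return precision, recall

highly_cited = {1, 2, 3, 4, 5}        # the top-cited publications
blog_mentions = {1, 2, 9}             # few flags, mostly correct
high_jcs = {1, 2, 3, 6, 7, 8, 9, 10}  # many flags, more noise

print(precision_recall(blog_mentions, highly_cited))  # (~0.67, 0.4)
print(precision_recall(high_jcs, highly_cited))       # (0.375, 0.6)
```

The invented numbers mirror the pattern reported above: the sparser filter (blog mentions) achieves higher precision but lower recall than the journal-level score.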
Scientometrics | 1994
Paul Wouters; Loet Leydesdorff
On the occasion of the completion of the 25th volume of Scientometrics, we present a combined bibliometric and social network analysis of this journal. In more than one respect, Scientometrics displays the characteristics of a social science journal. Its Price Index amounts to 43.0 percent and is remarkably stable over time. The majority of the published items in Scientometrics have been written by a single author. Moreover, the network of co-authorships is highly fragmented: most authors cooperate with no more than one or two colleagues. Both the citation networks of the authors and the network of title words indicate that the field is nonetheless highly cohesive. In this sense, a specific identity does indeed seem to have developed. Some indications concerning the character of this identity are discussed.
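For context, the Price Index is conventionally defined as the percentage of cited references published within the five years preceding the citing publication; the five-year window below is that conventional definition and is assumed to match the paper's usage:

```latex
% R is the set of cited references, y_r the publication year of
% reference r, and y_c the publication year of the citing item.
\[
  \mathrm{PI} \;=\; 100 \times
  \frac{\bigl|\{\, r \in R \;:\; y_c - y_r \le 5 \,\}\bigr|}{\lvert R \rvert}
\]
% A journal-level PI of 43.0 percent thus means that 43 of every 100
% cited references are at most five years old.
```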
Nature Reviews Neuroscience | 2012
Clement Levallois; John A. Clithero; Paul Wouters; Ale Smidts; Scott A. Huettel
The social and neural sciences share a common interest in understanding the mechanisms that underlie human behaviour. However, interactions between neuroscience and social science disciplines remain strikingly narrow and tenuous. We illustrate the scope and challenges for such interactions using the paradigmatic example of neuroeconomics. Using quantitative analyses of both its scientific literature and the social networks in its intellectual community, we show that neuroeconomics now reflects a true disciplinary integration, such that research topics and scientific communities with interdisciplinary span exert greater influence on the field. However, our analyses also reveal key structural and intellectual challenges in balancing the goals of neuroscience with those of the social sciences. To address these challenges, we offer a set of prescriptive recommendations for directing future research in neuroeconomics.
Scientometrics | 1999
Paul Wouters
A recurring theme in the use of science and technology indicators, as well as in the construction of new ones, is the interpretation of these indicators. Given the dependence on citation data in the majority of interesting science and technology indicators, a general citation theory would make the meaning of S&T indicators more transparent. Hence the continuing call for a citation theory in scientometrics. So far, such a theory has not yet been accepted by the experts in the field. This paper suggests an explanation for this. It also tries to sketch the outline of a general indicator theory by discussing new implications of an earlier proposal (Wouters, 1998) in relation to existing citation and indicator theories.
Scientometrics | 2016
Loet Leydesdorff; Paul Wouters; Lutz Bornmann
Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. These different positions may lead to different and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered as boundary objects which are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics which can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
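The h-index, named above as the archetypal simple but contested indicator, is the largest h such that a researcher has h publications each cited at least h times. A minimal sketch with made-up citation counts, which also shows one of its often-criticized blind spots:

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 2, 1]))        # 2: the 25-citation paper barely registers
```

The second example hints at why the abstract calls such indicators simple but problematic: the h-index is insensitive to how heavily the top papers are cited.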
Journal of Informetrics | 2013
Ludo Waltman; Nees Jan van Eck; Paul Wouters
Is more always better? We address this question in the context of bibliometric indices that aim to assess the scientific impact of individual researchers by counting their number of highly cited publications. We propose a simple model in which the number of citations of a publication depends not only on the scientific impact of the publication but also on other ‘random’ factors. Our model indicates that more need not always be better. It turns out that the most influential researchers may have a systematically lower performance, in terms of highly cited publications, than some of their less influential colleagues. The model also suggests an improved way of counting highly cited publications.
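A toy simulation can convey the flavor of such a model; the distributions and parameters below are invented for illustration and are not those of the paper. Citations are drawn as impact plus noise, and highly cited publications are counted against a fixed threshold:

```python
import random

random.seed(42)

def simulate_citations(impact, n_pubs, noise=10.0):
    """Each publication's citations mix the researcher's impact with chance."""
    return [max(0, int(random.gauss(impact, noise))) for _ in range(n_pubs)]

def highly_cited(citations, threshold=30):
    """Count publications at or above the citation threshold."""
    return sum(c >= threshold for c in citations)

strong = simulate_citations(impact=25, n_pubs=20)  # more influential researcher
weak = simulate_citations(impact=20, n_pubs=40)    # less influential, more prolific

print(highly_cited(strong), highly_cited(weak))
# The less influential researcher can end up with more highly cited
# publications, because more draws give the noise more chances to
# exceed the threshold.
```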
Aslib Journal of Information Management | 2015
Rodrigo Costas; Zohreh Zahedi; Paul Wouters
Purpose – The purpose of this paper is to analyze the disciplinary orientation of scientific publications that were mentioned on different social media platforms, focussing on their differences and similarities with citation counts. Design/methodology/approach – Social media metrics and readership counts, associated with 500,216 publications and their citation data from the Web of Science database, were collected from Altmetric.com and Mendeley. Results are presented through descriptive statistical analyses together with science maps generated with VOSviewer. Findings – The results confirm Mendeley as the most prevalent social media source with similar characteristics to citations in their distribution across fields and their density in average values per publication. The humanities, natural sciences, and engineering disciplines have a much lower presence of social media metrics. Twitter has a stronger focus on general medicine and social sciences. Other sources (blog, Facebook, Google+, and news media me...
PLOS Biology | 2016
John P. A. Ioannidis; Kevin W. Boyack; Paul Wouters
Citation metrics are increasingly used to appraise published research. One challenge is whether and how to normalize these metrics to account for differences across scientific fields, age (year of publication), type of document, database coverage, and other factors. We discuss the pros and cons for normalizations using different approaches. Additional challenges emerge when citation metrics need to be combined across multiple papers to appraise the corpus of scientists, institutions, journals, or countries, as well as when trying to attribute credit in multiauthored papers. Different citation metrics may offer complementary insights, but one should carefully consider the assumptions that underlie their calculation.
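One widely used normalization of the kind discussed here divides a paper's citation count by the average citation count of papers from the same field and publication year (the family of mean-normalized citation scores). A minimal sketch with invented data; the function name and record layout are illustrative:

```python
from collections import defaultdict

def normalized_scores(papers):
    """papers: list of dicts with 'field', 'year', and 'citations' keys."""
    totals, counts = defaultdict(int), defaultdict(int)
    for p in papers:
        key = (p["field"], p["year"])
        totals[key] += p["citations"]
        counts[key] += 1
    # Divide each paper's citations by its field/year average.
    return [
        p["citations"] * counts[(p["field"], p["year"])] / totals[(p["field"], p["year"])]
        for p in papers
    ]

papers = [
    {"field": "math", "year": 2015, "citations": 10},
    {"field": "math", "year": 2015, "citations": 30},
    {"field": "biology", "year": 2015, "citations": 30},
    {"field": "biology", "year": 2015, "citations": 90},
]
print(normalized_scores(papers))
# [0.5, 1.5, 0.5, 1.5]: after normalization, 30 citations in the
# low-citation field count the same as 90 in the high-citation field.
```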