Hans-Dieter Daniel
University of Zurich
Publications
Featured research published by Hans-Dieter Daniel.
Journal of Documentation | 2008
Lutz Bornmann; Hans-Dieter Daniel
Purpose – The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons.
Design/methodology/approach – The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications).
Findings – The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies also reveal other, in part non-scientific, factors that play a part in the decision to cite. However, the results of t...
Journal of the Association for Information Science and Technology | 2007
Lutz Bornmann; Hans-Dieter Daniel
Jorge Hirsch (2005a, 2005b) recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the scientific community. The claim that the h index provides, in a single number, a good representation of the scientific lifetime achievement of a scientist, as well as the (supposedly) simple calculation of the h index using common literature databases, leads to the danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.
Scientometrics | 2005
Lutz Bornmann; Hans-Dieter Daniel
Summary – Hirsch (2005) has proposed the h-index as a single-number criterion to evaluate the scientific output of a researcher (Ball, 2005): A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have fewer than h citations each. In a study on committee peer review (Bornmann & Daniel, 2005) we found that on average the h-index for successful applicants for post-doctoral research fellowships was consistently higher than for non-successful applicants.
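The definition above maps directly onto a few lines of code. A minimal sketch (the function name and example citation counts are ours, for illustration only):

```python
# h-index as defined above: a scientist has index h if h of their
# Np papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    """Compute the h-index from per-paper citation counts."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Example: three papers with at least 3 citations each, so h = 3.
print(h_index([10, 5, 3, 2, 1, 0]))  # -> 3
```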
EMBO Reports | 2009
Lutz Bornmann; Hans-Dieter Daniel
How does one measure the quality of science? The question is not rhetorical; it is extremely relevant to promotion committees, funding agencies, national academies and politicians, all of whom need a means by which to recognize and reward good research and good researchers. Identifying high-quality science is necessary for science to progress, but measuring quality becomes even more important in a time when individual scientists and entire research fields increasingly compete for limited amounts of money. The most obvious measure available is the bibliographic record of a scientist or research institute—that is, the number and impact of their publications.

Currently, the tool most widely used to determine the quality of scientific publications is the journal impact factor (IF), which is calculated by the scientific division of Thomson Reuters (New York, NY, USA) and is published annually in the Journal Citation Reports (JCR). The IF itself was developed in the 1960s by Eugene Garfield and Irving H. Sher, who were concerned that simply counting the number of articles a journal published in any given year would miss out small but influential journals in their Science Citation Index (Garfield, 2006).

The IF is the average number of times articles from the journal published in the past two years have been cited in the JCR year and is calculated by dividing the number of citations in the JCR year—for example, 2007—by the total number of articles published in the two previous years—2005 and 2006. Owing to the availability and utility of the IF, promotion committees, funding agencies and scientists have taken to using it as a shorthand assessment of the quality of scientists or institutions, rather than only journals. As Garfield has noted, this use of the IF is often necessary, owing to time …
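Written as a formula, the two-year IF described above (using the article's example of JCR year 2007) is:

```latex
\mathrm{IF}_{2007} \;=\;
\frac{\text{citations received in 2007 by items published in 2005 and 2006}}
     {\text{number of articles published in 2005 and 2006}}
```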
Journal of Informetrics | 2011
Lutz Bornmann; Rüdiger Mutz; Sven E. Hug; Hans-Dieter Daniel
This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total, 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the variants provide hardly any information beyond the h index itself. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N = 9,005; on average, each study had a sample size of n = 257. The results of a three-level cross-classified mixed-effects meta-analysis show a high correlation between the h index and its variants: depending on the model, the mean correlation coefficient varies between .8 and .9. This means that most of the h index variants are largely redundant with the h index. There is, however, statistically significant study-to-study variation in the correlation coefficients. The lowest correlations with the h index are found for the variants MII and m index; these variants therefore do contribute non-redundant information beyond the h index.
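As an illustration of one variant named above, the g index (Egghe, 2006) is the largest g such that the g most-cited papers together have at least g² citations. A minimal sketch with invented example data, not from the meta-analysis:

```python
# g index: largest g such that the top g papers have,
# in total, at least g^2 citations.
def g_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Top 4 papers: 10 + 5 + 3 + 2 = 20 >= 16, so g = 4
# (compare h = 3 for the same counts).
print(g_index([10, 5, 3, 2, 1, 0]))  # -> 4
```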
Journal of Informetrics | 2007
Lutz Bornmann; Rüdiger Mutz; Hans-Dieter Daniel
Narrative reviews of peer review research have concluded that there is negligible evidence of gender bias in the awarding of grants based on peer review. Here, we report the findings of a meta-analysis of 21 studies providing, to the contrary, evidence of robust gender differences in grant award procedures. Even though the estimates of the gender effect vary substantially from study to study, the model estimation shows that, all in all, male grant applicants have statistically significantly greater odds of receiving grants than women, by about 7%.
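A 7% odds advantage corresponds to an odds ratio of about 1.07. As a hypothetical worked example (the approval rates below are invented for illustration, not taken from the meta-analysis), a male approval rate of 40.0% against a female approval rate of 38.4% gives:

```latex
\mathrm{OR} \;=\; \frac{p_m/(1-p_m)}{p_w/(1-p_w)}
\;=\; \frac{0.400/0.600}{0.384/0.616}
\;\approx\; \frac{0.667}{0.623} \;\approx\; 1.07
```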
Journal of Documentation | 2008
Christoph Neuhaus; Hans-Dieter Daniel
Purpose – The purpose of this paper is to provide an overview of new citation-enhanced databases and to identify issues to be considered when they are used as a data source for performing citation analysis.
Design/methodology/approach – The paper reports the limitations of Thomson Scientific's citation indexes and reviews the characteristics of the citation-enhanced databases Chemical Abstracts, Google Scholar and Scopus.
Findings – The study suggests that citation-enhanced databases need to be examined carefully, with regard to both their potentialities and their limitations for citation analysis.
Originality/value – The paper presents a valuable overview of new citation-enhanced databases in the context of research evaluation.
Review of Educational Research | 2009
Herbert W. Marsh; Lutz Bornmann; Rüdiger Mutz; Hans-Dieter Daniel; Alison J. O'Mara
Peer review is valued in higher education, but it is also widely criticized for potential biases, particularly gender bias. We evaluate gender differences in peer reviews of grant applications, extending Bornmann, Mutz, and Daniel's meta-analyses, which reported small gender differences in favor of men (d = .04) but substantial heterogeneity in effect sizes that compromised the robustness of their results. We contrast these findings with the most comprehensive single primary study (Marsh, Jayasinghe, and Bond), which found no gender differences for grant proposals. We juxtapose traditional (fixed- and random-effects) and multilevel models, demonstrating important advantages of the multilevel approach. Consistent with Marsh et al.'s primary study, there were no gender differences for the 40 (of 66) effect sizes from Bornmann et al. that were based on grant proposals. This lack of a gender effect for grant proposals was very robust, generalizing over country, discipline, and publication year.
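The traditional random-effects model mentioned above is commonly estimated with the DerSimonian-Laird procedure. A minimal sketch of that baseline (the effect sizes and variances are invented; this is the model the paper's multilevel approach improves on, not the paper's own analysis):

```python
import numpy as np

def dersimonian_laird(effects: np.ndarray, variances: np.ndarray):
    """Pool effect sizes under a DerSimonian-Laird random-effects model."""
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance estimate
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

d = np.array([0.10, -0.02, 0.05, 0.08, 0.00])   # hypothetical d values
v = np.array([0.01, 0.02, 0.015, 0.012, 0.03])  # hypothetical variances
print(dersimonian_laird(d, v))
```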
PLOS ONE | 2010
Lutz Bornmann; Rüdiger Mutz; Hans-Dieter Daniel
Background – This paper presents the first meta-analysis of the inter-rater reliability (IRR) of journal peer reviews. IRR is defined as the extent to which two or more independent reviews of the same scientific document agree.
Methodology/Principal Findings – Altogether, 70 reliability coefficients (Cohen's Kappa, intra-class correlation [ICC], and Pearson product-moment correlation [r]) from 48 studies were taken into account in the meta-analysis. The studies were based on a total of 19,443 manuscripts; on average, each study had a sample size of 311 manuscripts (minimum: 28, maximum: 1,983). The results of the meta-analysis confirmed the findings of the narrative literature reviews published to date: the level of IRR (mean ICC/r2 = .34, mean Cohen's Kappa = .17) was low. To explain the study-to-study variation of the IRR coefficients, meta-regression analyses were calculated using seven covariates. Two covariates emerged as statistically significant for obtaining approximate homogeneity of the intra-class correlations: first, the more manuscripts a study is based on, the smaller the reported IRR coefficients are; second, if a study reported the rating system given to reviewers, this was associated with a smaller IRR coefficient than if that information was not conveyed.
Conclusions/Significance – Studies that report a high level of IRR are to be considered less credible than those with a low level of IRR. According to our meta-analysis, the IRR of peer assessments is quite limited and needs improvement (e.g., a reader system).
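Cohen's Kappa, one of the reliability coefficients pooled above, is raw agreement corrected for chance agreement. A minimal sketch (the 2x2 table of accept/reject decisions by two reviewers is hypothetical):

```python
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    """Cohen's kappa from a square agreement table of two raters."""
    n = table.sum()
    p_observed = np.trace(table) / n                        # raw agreement
    p_chance = np.sum(table.sum(0) * table.sum(1)) / n**2   # expected by chance
    return (p_observed - p_chance) / (1.0 - p_chance)

# Rows: reviewer A (accept, reject); columns: reviewer B (accept, reject).
decisions = np.array([[20, 15],
                      [18, 47]])
print(round(cohens_kappa(decisions), 2))  # -> 0.29
```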
Journal of Informetrics | 2012
Lutz Bornmann; Hermann Schier; Werner Marx; Hans-Dieter Daniel
A number of bibliometric studies point out that citation counts are a function of many variables besides scientific quality. In this paper, we investigate the factors that typically influence citation counts, using an extensive data set from the field of chemistry. The data set contains roughly 2,000 manuscripts that were submitted to the journal Angewandte Chemie International Edition (AC-IE) as short communications, reviewed by external reviewers, and either published in AC-IE or, if not accepted for publication by AC-IE, published elsewhere. As the reviewers' ratings of the importance of the manuscripts' results are also available to us, we can examine the extent to which certain factors that previous studies demonstrated to be generally correlated with citation counts increase the impact of papers, while controlling for the quality of the manuscripts (as measured by reviewers' ratings of the importance of the findings) in the statistical analysis. As the results show, besides being associated with quality, citation counts are correlated with the citation performance of the cited references, the language of the publishing journal, the chemical subfield, and the reputation of the authors. In this study, no statistically significant correlation was found between citation counts and the number of authors.
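The abstract does not name the regression model used to control for quality; count outcomes such as citations are often modeled with negative binomial regression. A sketch under that assumption (the data, coefficients, and variable names are invented, not the AC-IE data set):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data mirroring the kinds of covariates the abstract names:
# reviewer rating of importance, impact of cited references, author count.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "quality": rng.integers(1, 6, n),        # reviewer rating, 1-5
    "ref_impact": rng.normal(10, 3, n),      # mean citations of cited refs
    "n_authors": rng.integers(1, 10, n),
})
mu = np.exp(0.3 * df["quality"] + 0.05 * df["ref_impact"])
df["citations"] = rng.poisson(mu)            # simulated citation counts

# Does ref_impact still predict citations once quality is controlled for?
model = smf.negativebinomial(
    "citations ~ quality + ref_impact + n_authors", data=df
).fit(disp=0)
print(model.summary())
```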