Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Lutz Bornmann is active.

Publication


Featured research published by Lutz Bornmann.


Journal of Documentation | 2008

What do citation counts measure? A review of studies on citing behavior

Lutz Bornmann; Hans-Dieter Daniel

Purpose – The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non‐scientific, reasons.

Design/methodology/approach – The review covers research published from the early 1960s up to mid‐2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications).

Findings – The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies also reveal other, in part non‐scientific, factors that play a part in the decision to cite. However, the results of t...


Journal of the Association for Information Science and Technology | 2007

What do we know about the h index?

Lutz Bornmann; Hans-Dieter Daniel

Jorge Hirsch (2005a, 2005b) recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the scientific community. The claim that the h index provides, in a single number, a good representation of the scientific lifetime achievement of a scientist, as well as the (supposed) simple calculation of the h index using common literature databases, leads to the danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.


Scientometrics | 2005

Does the h-index for ranking of scientists really work?

Lutz Bornmann; Hans-Dieter Daniel

Hirsch (2005) has proposed the h-index as a single-number criterion to evaluate the scientific output of a researcher (Ball, 2005): a scientist has index h if h of his/her N_p papers have at least h citations each, and the other (N_p − h) papers have fewer than h citations each. In a study on committee peer review (Bornmann & Daniel, 2005) we found that, on average, the h-index for successful applicants for post-doctoral research fellowships was consistently higher than for non-successful applicants.
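The h-index definition above translates directly into a few lines of code. Here is a minimal sketch (the function name and the example citation counts are illustrative, not taken from the paper):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Sort citation counts in descending order, then find the last
    # rank at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: of these six papers, three have at least 3 citations each,
# but there are not four with at least 4 each, so h = 3.
print(h_index([10, 5, 3, 2, 1, 0]))  # -> 3
```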


EMBO Reports | 2009

The state of h index research: Is the h index the ideal way to measure research performance?

Lutz Bornmann; Hans-Dieter Daniel

How does one measure the quality of science? The question is not rhetorical; it is extremely relevant to promotion committees, funding agencies, national academies and politicians, all of whom need a means by which to recognize and reward good research and good researchers. Identifying high‐quality science is necessary for science to progress, but measuring quality becomes even more important in a time when individual scientists and entire research fields increasingly compete for limited amounts of money. The most obvious measure available is the bibliographic record of a scientist or research institute, that is, the number and impact of their publications.

Currently, the tool most widely used to determine the quality of scientific publications is the journal impact factor (IF), which is calculated by the scientific division of Thomson Reuters (New York, NY, USA) and is published annually in the Journal Citation Reports (JCR). The IF itself was developed in the 1960s by Eugene Garfield and Irving H. Sher, who were concerned that simply counting the number of articles a journal published in any given year would miss small but influential journals in their Science Citation Index (Garfield, 2006). The IF is the average number of times articles from the journal published in the past two years have been cited in the JCR year; it is calculated by dividing the number of citations in the JCR year (for example, 2007) by the total number of articles published in the two previous years (2005 and 2006). Owing to the availability and utility of the IF, promotion committees, funding agencies and scientists have taken to using it as a shorthand assessment of the quality of scientists or institutions, rather than only journals. As Garfield has noted, this use of the IF is often necessary, owing to time …
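The two-year IF calculation described above is simple arithmetic. A minimal sketch, with made-up counts for a hypothetical journal (the function name and the numbers are illustrative only):

```python
def impact_factor(citations_in_jcr_year, items_prev_two_years):
    """Two-year journal impact factor: citations received in the JCR year
    by items from the two previous years, divided by the number of
    citable items published in those two years."""
    return citations_in_jcr_year / items_prev_two_years

# Hypothetical journal: items from 2005 and 2006 drew 1,200 citations
# in 2007; the journal published 150 + 170 citable items in 2005-2006.
print(impact_factor(1200, 150 + 170))  # -> 3.75
```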


Journal of the Association for Information Science and Technology | 2015

Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references

Lutz Bornmann; Ruediger Mutz

Many studies (in information science) have looked at the growth of science. In this study, we reexamine the question of the growth of science. To do this we (a) use current data up to publication year 2012 and (b) analyze the data across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data were analyzed with an advanced statistical technique—segmented regression analysis—which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (a) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (b) the number of cited references in the publications of the source items per cited reference year. We looked at the rate at which science has grown since the mid‐1600s. In our analysis of cited references we identified three essential growth phases in the development of science, which each led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars, and 8 to 9% to 2010.
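The segment-wise growth rates reported above can be illustrated with a log-linear fit per segment: under constant exponential growth, the slope of log(counts) against year implies the annual growth rate. A toy sketch on synthetic data (the breakpoints and counts are invented; the paper estimates breakpoints from real WoS data with segmented regression, which this simple version does not reproduce):

```python
import numpy as np

def annual_growth_rate(years, counts):
    """Fit log(counts) ~ a + b*year by least squares; the slope b
    implies a constant annual growth rate of exp(b) - 1."""
    slope, _ = np.polyfit(years, np.log(counts), 1)
    return np.exp(slope) - 1.0

# Synthetic series: 3% annual growth over 1900-1945, 8% over 1946-2010.
years1 = np.arange(1900, 1946)
years2 = np.arange(1946, 2011)
counts1 = 100.0 * 1.03 ** (years1 - 1900)
counts2 = counts1[-1] * 1.08 ** (years2 - 1945)

for yrs, cts in [(years1, counts1), (years2, counts2)]:
    print(f"{yrs[0]}-{yrs[-1]}: {annual_growth_rate(yrs, cts):.1%} per year")
# -> 1900-1945: 3.0% per year; 1946-2010: 8.0% per year
```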


Journal of the Association for Information Science and Technology | 2013

What is societal impact of research and how can it be assessed? A literature survey

Lutz Bornmann

Since the 1990s, the scope of research evaluations has become broader, as the societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research come into scope. Society can reap the benefits of successful research studies only if the results are converted into marketable and consumable products (e.g., medicaments, diagnostic tools, machines, and devices) or services. A series of different names have been introduced which refer to the societal impact of research: third-stream activities, societal benefits, societal quality, usefulness, public values, knowledge transfer, and societal relevance. What most of these names are concerned with is the assessment of social, cultural, environmental, and economic returns (impact and effects) from results (research output) or products (research outcome) of publicly funded research. This review intends to present existing research on, and practices employed in, the assessment of societal impact in the form of a literature survey. The objective is for this review to serve as a basis for the development of robust and reliable methods of societal impact measurement.


Journal of Informetrics | 2011

A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants

Lutz Bornmann; Rüdiger Mutz; Sven E. Hug; Hans-Dieter Daniel

This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the h index variants hardly provide added information to the h index. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N=9005; on average, each study had a sample size of n=257. The results of a three-level cross-classified mixed-effects meta-analysis show a high correlation between the h index and its variants: Depending on the model, the mean correlation coefficient varies between .8 and .9. This means that there is redundancy between most of the h index variants and the h index. There is a statistically significant study-to-study variation of the correlation coefficients in the information they yield. The lowest correlation coefficients with the h index are found for the h index variants MII and m index. Hence, these h index variants make a non-redundant contribution to the h index.
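A common baseline for pooling correlation coefficients across studies is the inverse-variance weighted mean of Fisher z-transformed correlations. A minimal fixed-effect sketch (the coefficients and sample sizes below are invented; the paper itself fits a three-level cross-classified mixed-effects model, which this simple version does not reproduce):

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform:
    weight each z = atanh(r) by n - 3 (the inverse of its sampling
    variance), average, and transform back with tanh."""
    zs = [math.atanh(r) for r in rs]
    weights = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)

# Invented example: five study-level correlations between the h index
# and one of its variants, with the studies' sample sizes.
rs = [0.85, 0.92, 0.78, 0.88, 0.90]
ns = [120, 300, 80, 250, 150]
print(round(pooled_correlation(rs, ns), 3))
```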


Journal of the Association for Information Science and Technology | 2011

Turning the tables on citation analysis one more time: Principles for comparing sets of documents

Loet Leydesdorff; Lutz Bornmann; Ruediger Mutz; Tobias Opthof

We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] are an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
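Percentile ranks are straightforward to compute once the reference set is fixed. A minimal sketch (the citation counts are made up, and ties are resolved by averaging ranks, which is one common convention; the paper discusses several evaluation schemes that this does not cover):

```python
def percentile_ranks(citations):
    """Percentile rank of each paper's citation count within the
    reference set, with tied counts sharing their average rank."""
    n = len(citations)
    ranks = []
    for c in citations:
        below = sum(1 for x in citations if x < c)
        ties = sum(1 for x in citations if x == c)
        # Mean rank of the tied group, expressed as a percentile.
        ranks.append(100.0 * (below + (ties + 1) / 2) / n)
    return ranks

# Example reference set of ten papers.
counts = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
for c, p in zip(counts, percentile_ranks(counts)):
    print(f"{c:>3} citations -> {p:.0f}th percentile")
```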


Journal of Informetrics | 2014

Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics

Lutz Bornmann

Today, it is not clear how the impact of research on areas of society other than science should be measured. While peer review and bibliometrics have become standard methods for measuring the impact of research within science, there is not yet an accepted framework within which to measure societal impact. Alternative metrics (called altmetrics to distinguish them from bibliometrics) are considered an interesting option for assessing the societal impact of research, as they offer new ways to measure (public) engagement with research output. Altmetrics is a term describing web-based metrics for the impact of publications and other scholarly material, using data from social media platforms (e.g. Twitter or Mendeley). This overview of studies explores the potential of altmetrics for measuring societal impact. It deals with the definition and classification of altmetrics; furthermore, their benefits and disadvantages for measuring impact are discussed.


Journal of Informetrics | 2012

The new Excellence Indicator in the World Report of the SCImago Institutions Rankings 2011

Lutz Bornmann; Félix de Moya Anegón; Loet Leydesdorff

The new excellence indicator in the World Report of the SCImago Institutions Rankings (SIR) makes it possible to test differences in the ranking in terms of statistical significance. For example, at the 17th position of these rankings, UCLA has an output of 37,994 papers with an excellence indicator of 28.9. Stanford University follows at the 19th position with 37,885 papers and 29.1 excellence, and z = -0.607. The difference between these two institutions is thus not statistically significant. We provide a calculator at this http URL in which one can run this test for any two institutions, and also test, for each institution, whether its score is significantly above or below expectation (assuming that, for stochastic reasons, 10% of the papers are in the top-10% set).
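The z value quoted above is consistent with a standard two-proportion z-test applied to the institutions' shares of top-10% papers. A minimal sketch (the choice of test is inferred from the numbers in the abstract, which it reproduces):

```python
import math

def z_two_proportions(k1, n1, k2, n2):
    """Two-proportion z-test with a pooled variance estimate,
    comparing the shares k1/n1 and k2/n2."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# UCLA: 28.9% of 37,994 papers in the top-10% set;
# Stanford: 29.1% of 37,885 papers.
ucla_top = 0.289 * 37994
stanford_top = 0.291 * 37885
print(round(z_two_proportions(ucla_top, 37994, stanford_top, 37885), 3))
# -> -0.607, matching the value in the abstract
```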

Collaboration


Dive into Lutz Bornmann's collaborations.

Top Co-Authors


Félix de Moya Anegón

Spanish National Research Council
