Publication


Featured research published by Sven E. Hug.


Journal of Informetrics | 2011

A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants

Lutz Bornmann; Rüdiger Mutz; Sven E. Hug; Hans-Dieter Daniel

This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the variants hardly provide information beyond the h index. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N = 9005; on average, each study had a sample size of n = 257. The results of a three-level cross-classified mixed-effects meta-analysis show a high correlation between the h index and its variants: depending on the model, the mean correlation coefficient varies between .8 and .9. This means that most of the h index variants are largely redundant with the h index, although the correlation coefficients vary significantly from study to study. The lowest correlations with the h index are found for the MII and the m index; these variants therefore provide information beyond the h index.
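
To make the indicators concrete, here is a minimal Python sketch (not taken from the paper) that computes the h index and one of the variants it covers, the g index, from a list of citation counts:

```python
def h_index(citations):
    """h index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g index (Egghe): the largest g such that the g most cited papers
    together have at least g**2 citations."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [25, 12, 9, 7, 3, 1, 0]
print(h_index(papers))  # 4 -> four papers with at least 4 citations each
print(g_index(papers))  # 7 -> the top 7 papers have 57 >= 49 citations
```

The high mean correlations reported in the paper mean that, across a set of researchers, rankings produced by such variants tend to track the h index closely.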


Scientometrics | 2017

Citation analysis with Microsoft Academic

Sven E. Hug; Michael Ochsner; Martin Brändle

We explore whether and how Microsoft Academic (MA) could be used for bibliometric analyses. First, we examine the Academic Knowledge API (AK API), an interface to access MA data, and compare it to Google Scholar (GS). Second, we perform a comparative citation analysis of researchers by normalizing data from MA and Scopus. We find that MA offers structured and rich metadata, which facilitates data retrieval, handling and processing. In addition, the AK API allows retrieving frequency distributions of citations. We consider these features a major advantage of MA over GS. However, we identify four main limitations in the available metadata. First, MA does not provide the document type of a publication. Second, the “fields of study” are dynamic and too specific, and the field hierarchies are incoherent. Third, some publications are assigned to incorrect years. Fourth, the metadata of some publications does not include all authors. Nevertheless, we show that an average-based indicator (the journal normalized citation score, JNCS) as well as a distribution-based indicator (percentile rank classes, PR classes) can be calculated with relative ease using MA. Hence, normalizing citation counts is feasible with MA. The citation analyses in MA and Scopus yield consistent results: the JNCS and the PR classes are similar in both databases, and, as a consequence, the evaluation of the researchers’ publication impact is congruent in MA and Scopus. Given MA’s fast development over the last year, we postulate that it has the potential to support full-fledged bibliometric analyses.
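
As an illustration of the average-based indicator, here is a minimal sketch of the JNCS computation; the records and the journal/year reference means below are invented, and in practice the reference values would be retrieved from MA or Scopus:

```python
from statistics import mean

# Hypothetical publication records: (journal, year, citations).
papers = [
    ("Scientometrics", 2017, 30),
    ("Scientometrics", 2017, 10),
    ("Journal of Informetrics", 2011, 45),
]

# Reference values: the mean citation count of all papers published
# in the same journal and year (stubbed with invented numbers here).
journal_year_mean = {
    ("Scientometrics", 2017): 12.5,
    ("Journal of Informetrics", 2011): 18.0,
}

def jncs(papers, reference):
    """Journal normalized citation score: the average of each paper's
    citations divided by its journal/year reference mean."""
    return mean(c / reference[(j, y)] for j, y, c in papers)

print(round(jncs(papers, journal_year_mean), 2))  # 1.9
```

A JNCS above 1 indicates that the researcher’s papers are, on average, cited more often than papers in the same journals and years.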


Scientometrics | 2017

The coverage of Microsoft Academic: analyzing the publication output of a university

Sven E. Hug; Martin Brändle

This is the first detailed study on the coverage of Microsoft Academic (MA). Based on the complete and verified publication list of a university, the coverage of MA was assessed and compared with two benchmark databases, Scopus and Web of Science (WoS), on the level of individual publications. Citation counts were analyzed, and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA based on publication titles. The script is freely available on GitHub. We find that MA covers journal articles, working papers, and conference items to a substantial extent and indexes more document types than the benchmark databases (e.g., working papers, dissertations). MA clearly surpasses Scopus and WoS in covering book-related document types and conference items but falls slightly behind Scopus in journal articles. The coverage of MA is favorable for evaluative bibliometrics in most research fields, including economics/business, computer/information sciences, and mathematics. However, MA shows biases similar to Scopus and WoS with regard to the coverage of the humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases. We find that the publication year is correct for 89.5% of all publications and the number of authors is correct for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA metadata are still lacking.
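
The title-based retrieval hinges on matching normalized titles. The authors’ actual script is written in Perl and available on GitHub; the Python sketch below only illustrates the idea, assuming both sides are reduced to a lowercase, punctuation-free form (the normalized shape in which the AK API exposed titles):

```python
import re

def normalize_title(title):
    """Reduce a title to a matching key: lowercase, strip punctuation,
    collapse whitespace."""
    title = re.sub(r"[^a-z0-9 ]+", " ", title.lower())
    return re.sub(r"\s+", " ", title).strip()

# A verified local publication list (left) matched against titles
# retrieved from MA (right); both reduced to normalized keys.
local_titles = [
    "The coverage of Microsoft Academic: analyzing the publication output of a university",
]
ma_titles = {
    "the coverage of microsoft academic analyzing the publication output of a university",
}

covered = [t for t in local_titles if normalize_title(t) in ma_titles]
print(f"{len(covered)} of {len(local_titles)} publications found in MA")
```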


Scientometrics | 2018

The number of linked references of publications in Microsoft Academic in comparison with the Web of Science

Robin Haunschild; Sven E. Hug; Martin Brändle; Lutz Bornmann

In the context of a comprehensive Microsoft Academic (MA) study, we explored, as a first step, the quality of linked-reference data in MA in comparison with the Web of Science (WoS). Linked references are the backbone of bibliometrics because they are the basis of the times-cited information in citation indexes. We found that the concordance of linked references between MA and WoS ranges from weak to nonexistent for the full sample (publications of the University of Zurich with fewer than 50 linked references in MA). An analysis restricted to publications with fewer than 50 linked references in WoS, however, showed strong agreement between the linked references in MA and WoS.
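
A minimal stand-in for such a comparison (the counts below are invented, and the concordance statistics used in the paper are more elaborate than the two summary numbers computed here):

```python
# Hypothetical per-publication linked-reference counts as (MA, WoS) pairs.
counts = [(34, 35), (0, 28), (41, 41), (12, 40), (27, 27)]

# Restrict the sample as in the second analysis: fewer than 50
# linked references in WoS.
sample = [(ma, wos) for ma, wos in counts if wos < 50]

exact = sum(ma == wos for ma, wos in sample) / len(sample)
mad = sum(abs(ma - wos) for ma, wos in sample) / len(sample)
print(f"exact agreement: {exact:.0%}, mean absolute difference: {mad:.1f}")
```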


Scientometrics | 2018

Visualizing the context of citations referencing papers published by Eugene Garfield: a new type of keyword co-occurrence analysis

Lutz Bornmann; Robin Haunschild; Sven E. Hug

During Eugene Garfield’s (EG’s) lengthy career as an information scientist, he published about 1,500 papers. In this study, we use EG’s impressive oeuvre to introduce a new type of bibliometric network: keyword co-occurrence networks based on the contexts of citations that reference a given paper set (here, the papers published by EG). The citation context is defined by the words located around a specific citation; we retrieved the citation contexts from Microsoft Academic. To interpret and compare the results of the new network type, we generated two further networks: co-occurrence networks based on title and abstract keywords from (1) EG’s papers and (2) the papers citing EG’s publications. The comparison of the three networks suggests that EG’s papers and the citation contexts of papers citing EG are semantically more closely related to each other than to the titles and abstracts of papers citing EG. This result supports the use of citations in research evaluation, which rests on the premise that citations reflect the cognitive influence of the cited publication on the citing publication.
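
A minimal sketch of how such a citation-context co-occurrence network can be built, assuming the contexts have already been retrieved (the snippets and stopword list below are invented):

```python
from collections import Counter
from itertools import combinations

# Invented citation-context snippets (the words around a citation);
# in the study the contexts were retrieved from Microsoft Academic.
contexts = [
    "citation indexing was introduced by garfield for information retrieval",
    "the impact factor proposed by garfield is widely used in evaluation",
]

STOPWORDS = {"the", "was", "by", "for", "is", "in", "of", "widely", "used"}

def cooccurrences(texts):
    """Count how often two keywords appear in the same citation context;
    the counts become the edge weights of the network."""
    pairs = Counter()
    for text in texts:
        words = sorted({w for w in text.split() if w not in STOPWORDS})
        pairs.update(combinations(words, 2))
    return pairs

for (a, b), weight in cooccurrences(contexts).most_common(5):
    print(f"{a} -- {b}: {weight}")
```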


Research Assessment in the Humanities: Towards Criteria and Procedures | 2016

Humanities Scholars’ Conceptions of Research Quality

Michael Ochsner; Sven E. Hug; Hans-Dieter Daniel

The assessment of research performance in the humanities is linked to the question of what humanities scholars perceive as ‘good research’. Even though scholars evaluate research on a daily basis, e.g. while reading other scholars’ work, little is known about the quality concepts they rely on in these judgments. This chapter presents a project funded by the Rectors’ Conference of the Swiss Universities in which humanities scholars’ conceptions of research quality were investigated and translated into an approach to research evaluation in the humanities. The approach involves the scholars of a given discipline and seeks to identify agreed-upon concepts of quality. By applying the approach to three humanities disciplines, the project reveals both the opportunities and the limitations of research quality assessment in the humanities: an assessment based on quality criteria offers opportunities to make humanities research visible and to evaluate it, whereas a purely quantitative assessment by means of indicators is very limited and not accepted by scholars. However, indicators that are linked to humanities scholars’ notions of quality can be used to support peers in the evaluation process (i.e. informed peer review).


Research Assessment in the Humanities: Towards Criteria and Procedures | 2016

Research Assessment in the Humanities: Introduction

Michael Ochsner; Sven E. Hug; Hans-Dieter Daniel

Research assessments in the humanities are highly controversial. While citation-based research performance indicators are widely used in the natural and life sciences, quantitative measures of research performance meet strong opposition in the humanities. Since many problems are connected to the use of bibliometrics in the humanities, new approaches have to be considered for the assessment of humanities research. Recently, concepts and methods for measuring research quality in the humanities have been developed in several countries. The edited volume ‘Research Assessment in the Humanities: Towards Criteria and Procedures’ analyses and discusses these recent developments in depth. It combines presentations of state-of-the-art projects on research assessment in the humanities, written by humanities scholars themselves, with descriptions of the evaluation of humanities research in practice, presented by research funders. A discussion of bibliometric issues concerning humanities research completes the analysis.


Sociologia e Politiche Sociali | 2015

Quality criteria for sociology? What sociologists can learn from the project «Developing and Testing Research Quality Criteria in the Humanities»

Michael Ochsner; Tobias Wolbring; Sven E. Hug

Universities play an important role in the knowledge society. For reasons of accountability to the public, or in order to assure or enhance research quality, many universities have implemented assessment procedures, often using bibliometric and other performance indicators. These procedures are mostly developed in a data-driven manner, and little is known about what the indicators actually measure and how they affect behavior. Furthermore, the methods stem from the natural and life sciences and cannot readily be transferred to the social sciences and humanities. In this article, we (i) present quality criteria for research from the perspective of humanities scholars and show how they can be transferred to sociology, (ii) summarise the opportunities and limitations of the research rating of the German Council of Science and Humanities, and (iii) suggest that sociology as a discipline should develop a discipline-specific approach to research evaluation that takes into account sociologists’ notions of quality and the discipline’s research practices, that is bottom-up in nature, and that uses both quantitative and qualitative data.


Research Evaluation | 2013

Criteria for assessing research quality in the humanities: a Delphi study among scholars of English literature, German literature and art history

Sven E. Hug; Michael Ochsner; Hans-Dieter Daniel


Research Evaluation | 2012

Four types of research in the humanities: Setting the stage for research quality criteria in the humanities

Michael Ochsner; Sven E. Hug; Hans-Dieter Daniel
