Ruth Fairclough
University of Wolverhampton
Publications
Featured research published by Ruth Fairclough.
Journal of Information Science | 2004
Gareth Harries; David Wilkinson; Liz Price; Ruth Fairclough; Mike Thelwall
Hyperlinks between academic web sites, like citations, can potentially be used to map disciplinary structures and identify evidence of connections between disciplines. In this paper we classified a sample of links originating in three different disciplines: maths, physics and sociology. Links within a discipline were found to be different in character to links between pages in different disciplines. There were also disciplinary differences in both types of link. As a consequence, we argue that interpretations of web science maps covering multiple disciplines will need to be sensitive to the contexts of the links mapped.
Journal of the Association for Information Science and Technology | 2006
Mike Thelwall; Katie Vann; Ruth Fairclough
In this article Web issue analysis is introduced as a new technique to investigate an issue as reflected on the Web. The issue chosen, integrated water resource management (IWRM), is a United Nations–initiated paradigm for managing water resources in an international context, particularly in developing nations. As with many international governmental initiatives, there is a considerable body of online information about it: 41,381 hypertext markup language (HTML) pages and 28,735 PDF documents mentioning the issue were downloaded. A page uniform resource locator (URL) and link analysis revealed the international and sectoral spread of IWRM. A noun and noun phrase occurrence analysis was used to identify the issues most commonly discussed, revealing some unexpected topics such as "private sector" and "economic growth". Although the complexity of the methods required to produce meaningful statistics from the data hinders easy interpretation, it was still possible to produce data open to a reasonably intuitive reading. Hence Web issue analysis is claimed to be a useful new technique for information science.
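The URL analysis step lends itself to a short illustration. The following is a minimal sketch, assuming each downloaded page is identified by its URL and that the final domain label is a rough proxy for country or sector; the data and function names are invented, not the authors' pipeline.

```python
# Minimal sketch of a URL-based spread analysis (hypothetical data;
# not the authors' actual pipeline).
from collections import Counter
from urllib.parse import urlparse

pages = [
    "http://www.un.org/waterforlifedecade/iwrm.shtml",
    "http://www.gwp.org/en/ToolBox/ABOUT/IWRM-Plans/",
    "http://www.dwa.gov.za/iwrm/documents.aspx",
]

def tld(url):
    """Return the last label of the host name, e.g. 'org' or 'za'."""
    return urlparse(url).netloc.rsplit(".", 1)[-1]

spread = Counter(tld(u) for u in pages)
print(spread.most_common())  # e.g. [('org', 2), ('za', 1)]
```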
Journal of Informetrics | 2015
Ruth Fairclough; Mike Thelwall
National research impact indicators derived from citation counts are used by governments to help assess their national research performance and to identify the effect of funding or policy changes. Citation counts lag research by several years, however, and so the information they carry is somewhat out of date. Some of this lag can be avoided by using readership counts from the social reference sharing site Mendeley, because these accumulate more quickly than citations. This article introduces a method to calculate national research impact indicators from Mendeley, using citation counts from older time periods to partially compensate for international biases in Mendeley readership. A refinement to accommodate recent national changes in Mendeley uptake makes little difference, despite being theoretically more accurate. The Mendeley indicators produced by these methods broadly reflect the results of similar calculations with citations and seem to reflect impact trends about a year earlier. Nevertheless, the reasons for the differences between the indicators from the two data sources are unclear.
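As a rough illustration of the indicator's core arithmetic, the sketch below averages field- and year-normalised Mendeley reader counts per country. The records and the simple mean normalisation are assumptions for illustration; the article's actual bias compensation using older citation data is more involved.

```python
# Sketch of a field-normalised readership indicator (illustrative
# records only; the article's normalisation is more refined).
from collections import defaultdict
from statistics import mean

# (country, field, year, mendeley_readers) for each article
articles = [
    ("UK", "InfoSci", 2013, 12), ("UK", "InfoSci", 2013, 4),
    ("US", "InfoSci", 2013, 20), ("US", "InfoSci", 2013, 8),
]

# World mean readers per (field, year)
world = defaultdict(list)
for _, field, year, readers in articles:
    world[(field, year)].append(readers)
world_mean = {k: mean(v) for k, v in world.items()}

# Average normalised readership per country
by_country = defaultdict(list)
for country, field, year, readers in articles:
    by_country[country].append(readers / world_mean[(field, year)])
indicator = {c: mean(v) for c, v in by_country.items()}
print(indicator)  # e.g. {'UK': 0.727..., 'US': 1.272...}
```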
Journal of Informetrics | 2015
Mike Thelwall; Ruth Fairclough
Journal impact factors (JIFs) are widely used and promoted but have important limitations. In particular, JIFs can be unduly influenced by individual highly cited articles and hence are inherently unstable. A logical way to reduce the impact of individual high citation counts is to use the geometric mean rather than the arithmetic mean in JIF calculations. Based upon journal rankings 2004-2014 in 50 sub-categories within 5 broad categories, this study shows that journal rankings based on JIF variants tend to be more stable over time if the geometric mean is used rather than the arithmetic mean. The same is true for JIF variants using Mendeley reader counts instead of citation counts. Thus, although the difference is not large, the geometric mean is recommended instead of the arithmetic mean for future JIF calculations. In addition, Mendeley readership-based JIF variants are as stable as those using Scopus citations, confirming the value of Mendeley readership as an academic impact indicator.
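The geometric mean variant is straightforward to compute via logarithms, with ln(1 + c) used so that zero-cited articles do not break the calculation. A minimal sketch, with invented citation counts:

```python
# Geometric-mean impact variant, as a minimal sketch. log1p handles
# zero-cited articles; the data here are illustrative.
import math

def geometric_mean_if(citation_counts):
    """exp(mean(ln(1 + c))) - 1 over a journal's citable items."""
    logs = [math.log1p(c) for c in citation_counts]
    return math.expm1(sum(logs) / len(logs))

def arithmetic_mean_if(citation_counts):
    return sum(citation_counts) / len(citation_counts)

# One highly cited article dominates the arithmetic mean
# but barely moves the geometric mean.
counts = [0, 1, 1, 2, 3, 500]
print(arithmetic_mean_if(counts))   # 84.5
print(geometric_mean_if(counts))    # about 4.4
```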
Journal of Informetrics | 2015
Ruth Fairclough; Mike Thelwall
Governments sometimes need to analyse sets of research papers within a field in order to monitor progress, assess the effect of recent policy changes, or identify areas of excellence. They may compare the average citation impacts of the papers by dividing them by the world average for the field and year. Since citation data is highly skewed, however, simple averages may be too imprecise to robustly identify differences within, rather than across, fields. In response, this article introduces two new methods to identify national differences in average citation impact, one based on linear modelling for normalised data and the other using the geometric mean. Results from a sample of 26 Scopus fields between 2009 and 2015 show that geometric means are the most precise and so are recommended for smaller sample sizes, such as for individual fields. The regression method has the advantage of distinguishing between national contributions to internationally collaborative articles, but has substantially wider confidence intervals than the geometric mean, undermining its value for any except the largest sample sizes.
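A plausible way to attach confidence intervals to such a geometric mean is to compute a normal-theory interval on the log scale and back-transform it. This is a sketch of that general approach on invented data, not necessarily the paper's exact formula:

```python
# Rough confidence interval for a geometric mean citation impact,
# computed on the log scale and back-transformed (a sketch only).
import math
from statistics import mean, stdev

def geometric_mean_ci(citations, z=1.96):
    logs = [math.log1p(c) for c in citations]
    m, se = mean(logs), stdev(logs) / math.sqrt(len(logs))
    return (math.expm1(m - z * se), math.expm1(m), math.expm1(m + z * se))

low, centre, high = geometric_mean_ci([0, 1, 1, 2, 3, 5, 8, 13])
print(f"{centre:.2f} (95% CI {low:.2f} to {high:.2f})")
```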
Journal of Informetrics | 2015
Mike Thelwall; Ruth Fairclough
Although various citation-based indicators are commonly used to help research evaluations, there are ongoing controversies about their value. In response, they are often correlated with quality ratings or with other quantitative indicators in order to partly assess their validity. When correlations are calculated for sets of publications from multiple disciplines or years, however, the magnitude of the correlation coefficient may be reduced, masking the strength of the underlying correlation. In response, this article uses simulations to systematically investigate the extent to which mixing years or disciplines reduces correlations. The results show that mixing two sets of articles with different correlation strengths can reduce the correlation for the combined set to substantially below the average of the two. Moreover, even mixing two sets of articles with the same correlation strength but different mean citation counts can substantially reduce the correlation for the combined set. The extent of the reduction in correlation also depends upon whether the articles assessed have been pre-selected for being high quality and whether the relationship between the quality ratings and citation counts is linear or exponential. The results underline the importance of using homogeneous data sets but also help to interpret correlation coefficients when this is impossible.
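The core mixing effect is easy to reproduce in a small simulation. The sketch below pools two synthetic article sets with identical quality-citation correlations but different mean citation counts; all distributions and parameters are illustrative assumptions rather than the article's exact set-up.

```python
# Simulation sketch of the mixing effect: pooling two sets with the
# same underlying correlation but different mean citation counts
# weakens the combined Pearson correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def sample(mean_citations):
    quality = rng.normal(size=n)
    citations = quality + rng.normal(size=n) + mean_citations
    return quality, citations

q1, c1 = sample(0.0)   # e.g. a recent, lightly cited year
q2, c2 = sample(5.0)   # e.g. an older, heavily cited year

r1 = np.corrcoef(q1, c1)[0, 1]
r2 = np.corrcoef(q2, c2)[0, 1]
r_mixed = np.corrcoef(np.concatenate([q1, q2]),
                      np.concatenate([c1, c2]))[0, 1]
print(r1, r2, r_mixed)  # ~0.71, ~0.71, ~0.35
```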
Journal of Informetrics | 2017
Mike Thelwall; Ruth Fairclough
When comparing the average citation impact of research groups, universities and countries, field normalisation reduces the influence of discipline and time. Confidence intervals for these indicators can help with attempts to infer whether differences between sets of publications are due to chance factors. Although both bootstrapping and formulae have been proposed for these, their accuracy is unknown. In response, this article uses simulated data to systematically compare the accuracy of confidence limits in the simplest possible case, a single field and year. The results suggest that the MNLCS (Mean Normalised Log-transformed Citation Score) confidence interval formula is conservative for large groups but almost always safe, whereas bootstrap MNLCS confidence intervals tend to be accurate but can be unsafe for smaller world or group sample sizes. In contrast, bootstrap MNCS (Mean Normalised Citation Score) confidence intervals can be very unsafe, although their accuracy increases with sample sizes.
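In the single field and year case, the MNLCS reduces to the mean of ln(1 + c) for the group divided by the corresponding world mean. The sketch below computes it with a percentile bootstrap interval on invented data; the article's formula-based interval is not reproduced here.

```python
# Sketch of MNLCS with a bootstrap confidence interval (simplified
# to one field and year, as in the article's test case; data invented).
import numpy as np

rng = np.random.default_rng(1)
world = np.log1p(rng.poisson(5, size=5000))   # stand-in world citations
group = np.log1p(rng.poisson(7, size=200))    # stand-in group citations

mnlcs = group.mean() / world.mean()

boots = []
for _ in range(2000):
    g = rng.choice(group, size=group.size, replace=True)
    w = rng.choice(world, size=world.size, replace=True)
    boots.append(g.mean() / w.mean())
low, high = np.percentile(boots, [2.5, 97.5])
print(f"MNLCS = {mnlcs:.3f} (95% bootstrap CI {low:.3f} to {high:.3f})")
```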
Information Processing and Management | 2006
Mike Thelwall; Saheeda Thelwall; Ruth Fairclough
Web issue analysis, a new technique designed to rapidly give timely management intelligence about a topic from an automated large-scale analysis of relevant Web pages, is introduced and demonstrated. The technique includes hyperlink and URL analysis to identify common direct and indirect sources of Web information. In addition, text analysis through natural language processing techniques is used to identify relevant common nouns and noun phrases. A case study approach is taken, applying Web issue analysis to the topic of nurse prescribing. The results are presented in descriptive form and a qualitative analysis is used to argue that new information has been found. The nurse prescribing results demonstrate interesting new findings, such as the parochial nature of the topic in the UK, an apparent absence of similar concepts internationally, at least in the English-speaking world, and a significant concern with mental health issues. These demonstrate that automated Web issue analysis is capable of quickly delivering new insights into a problem. General limitations are that the success of Web issue analysis depends upon the particular topic chosen and the ability to find a phrase that accurately captures the topic and is not used in other contexts, as well as being language-specific.
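The noun phrase counting step can be approximated crudely without full NLP machinery. The sketch below counts bigrams after stop-word removal as stand-in candidate phrases; the text, stop list, and heuristic are illustrative assumptions, and the paper's actual natural language processing is more sophisticated.

```python
# Crude stand-in for the noun-phrase analysis: count bigrams after
# stop-word removal (text and stop list are illustrative).
import re
from collections import Counter

STOP = {"the", "of", "and", "in", "to", "a", "for", "is", "on", "by"}

def candidate_phrases(text):
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    return zip(words, words[1:])

corpus = [
    "Nurse prescribing in mental health services in the UK",
    "Training for nurse prescribing and mental health teams",
]
counts = Counter(p for doc in corpus for p in candidate_phrases(doc))
print(counts.most_common(3))
```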
Journal of Informetrics | 2017
Mike Thelwall; Ruth Fairclough
Policy makers and managers sometimes assess the share of research produced by a group (country, department, institution). This takes the form of the percentage of publications in a journal, field or broad area that has been published by the group. This quantity is affected by essentially random influences that obscure underlying changes over time and differences between groups. A model of research production is needed to help identify whether differences between two shares indicate underlying differences. This article introduces a simple production model for indicators that report the share of the world's output in a journal or subject category, assuming that every new article has the same probability of being authored by a given group. With this assumption, confidence limits can be calculated for the underlying production capability (i.e., the probability of publishing). The results of a time series analysis of national contributions to 36 large monodisciplinary journals 1996-2016 are broadly consistent with this hypothesis. Follow-up tests of countries and institutions in 26 Scopus subject categories support the conclusions but highlight the importance of ensuring consistent subject category coverage.
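Under the stated assumption that each new article is authored by the group with a fixed probability, the group's article count is binomial, so a standard binomial interval such as the Wilson score interval gives confidence limits for the underlying production capability. A sketch, assuming this standard interval is an acceptable stand-in for the article's exact formula:

```python
# Wilson confidence interval for an underlying publication
# probability, matching the kind of share model described
# (a sketch; the article's exact formula may differ).
import math

def wilson_interval(group_articles, world_articles, z=1.96):
    """CI for the probability that a new article comes from the group."""
    n, p = world_articles, group_articles / world_articles
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. a country with 312 of a journal's 5,000 articles in a period
low, high = wilson_interval(312, 5000)
print(f"share 6.24%, CI {low:.2%} to {high:.2%}")
```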
Journal of the Association for Information Science and Technology | 2006
Mike Thelwall; Rudy Prabowo; Ruth Fairclough