
Publication


Featured research published by Carole Torgerson.


British Journal of Educational Studies | 2001

The Need for Randomised Controlled Trials in Educational Research

Carole Torgerson; David Torgerson

This paper argues for more randomised controlled trials in educational research. Educational researchers have largely abandoned the methodology they helped to pioneer. This gold-standard methodology should be more widely used, as it is an appropriate and robust research technique. Unless curriculum innovations are subjected to an RCT, potentially harmful educational initiatives could be visited upon the nation's children.


British Journal of Educational Studies | 2006

Publication Bias: The Achilles’ Heel of Systematic Reviews?

Carole Torgerson

The term ‘publication bias’ usually refers to the tendency for a greater proportion of statistically significant positive results of experiments to be published and, conversely, a greater proportion of statistically significant negative or null results not to be published. It is widely accepted in the fields of healthcare and psychological research to be a major threat to the validity of systematic reviews and meta-analyses. Some methodological work has previously been undertaken, by the author and others, in the field of educational research to investigate the extent of the problem. This paper describes the problem of publication bias with reference to its history in a number of fields, with special reference to the area of educational research. Informal methods for detecting publication bias in systematic reviews and meta-analyses of controlled trials are outlined and retrospective and prospective methods for dealing with the problem are suggested.
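
Funnel-plot asymmetry is one of the informal detection methods the abstract alludes to. The sketch below simulates the selective-publication mechanism and applies an Egger-style regression check; the simulation parameters and publication probabilities are invented for illustration and are not drawn from the paper.

```python
import random
import statistics

random.seed(42)

def simulate_published_studies(n=200):
    """Simulate trials of an ineffective intervention (true effect 0),
    then apply selective publication: significant positive results are
    always published, other results only sometimes (hypothetical
    probabilities chosen to mimic publication bias)."""
    published = []
    for _ in range(n):
        se = random.uniform(0.05, 0.5)    # smaller trials have larger SEs
        effect = random.gauss(0.0, se)    # observed effect size
        z = effect / se
        if z > 1.96 or random.random() < (0.3 if z > 0 else 0.1):
            published.append((effect, se))
    return published

def egger_intercept(studies):
    """Egger-style check: regress the standardised effect (effect/SE)
    on precision (1/SE). An intercept well away from zero indicates
    funnel-plot asymmetry, consistent with publication bias."""
    y = [e / s for e, s in studies]
    x = [1.0 / s for e, s in studies]
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

studies = simulate_published_studies()
print(f"{len(studies)} of 200 simulated trials were published")
print(f"Egger intercept: {egger_intercept(studies):+.2f}")
```

Because positive significant results are over-represented among the "published" studies, the intercept comes out positive, which is the asymmetry a funnel plot would show visually.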


Journal of Child Psychology and Psychiatry | 2011

A systematic meta-analytic review of evidence for the effectiveness of the ‘Fast ForWord’ language intervention program

Gemma K. Strong; Carole Torgerson; David Torgerson; Charles Hulme

Background: Fast ForWord is a suite of computer-based language intervention programs designed to improve children's reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological representations. Methods: A systematic review was designed, undertaken and reported using items from the PRISMA statement. A literature search was conducted using the terms ‘Fast ForWord’, ‘Fast For Word’ and ‘Fastforword’, with no restriction on dates of publication. Following screening of (a) titles and abstracts and (b) full papers, using pre-established inclusion and exclusion criteria, six papers were identified as meeting the criteria for inclusion (randomised controlled trial (RCT) or matched group comparison studies with baseline equivalence published in refereed journals). Data extraction and analyses were carried out on reading and language outcome measures comparing the Fast ForWord intervention groups to both active and untreated control groups. Results: Meta-analyses indicated that there was no significant effect of Fast ForWord on any outcome measure in comparison to active or untreated control groups. Conclusions: There is no evidence from the analysis carried out that Fast ForWord is effective as a treatment for children's oral language or reading difficulties.
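
The pooling step in a meta-analysis like this can be sketched with inverse-variance (fixed-effect) weighting. The six effect sizes and standard errors below are invented placeholders, not the review's actual data:

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooling: each study's effect
    size is weighted by 1/SE^2, so larger, more precise studies
    dominate the pooled estimate."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical standardised mean differences from six small trials
effects = [0.10, -0.05, 0.20, 0.00, -0.12, 0.08]
ses     = [0.25,  0.30, 0.28, 0.22,  0.35, 0.26]

pooled, se, (lo, hi) = fixed_effect_pool(effects, ses)
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# a confidence interval spanning zero is what "no significant
# effect on any outcome measure" means in practice
```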


The Lancet | 2010

A framework for mandatory impact evaluation to ensure well informed public policy decisions

Andrew D Oxman; Arild Bjørndal; Francisco Becerra-Posada; Mark Gibson; Miguel Angel Gonzalez Block; Andy Haines; Maimunah Hamid; Carmen Hooker Odom; Haichao Lei; Ben Levin; Mark W. Lipsey; Julia H. Littell; Hassan Mshinda; Pierre Ongolo-Zogo; Tikki Pang; Nelson Sewankambo; Francisco Songane; Haluk Soydan; Carole Torgerson; David Weisburd; Judith A. Whitworth; Suwit Wibulpolprasert

Trillions of dollars are invested yearly in programmes to improve health, social welfare, education, and justice (which we will refer to generally as public programmes). Yet we know little about the effects of most of these attempts to improve people's lives, and what we do know is often not used to inform decisions. We propose that governments and non-governmental organisations (NGOs) address this failure responsibly by mandating more systematic and transparent use of research evidence to assess the likely effects of public programmes before they are launched, and the better use of well designed impact evaluations after they are launched. Resources for public programmes will always be scarce. In low-income and middle-income countries, where there are often particularly severe constraints on resources and many competing priorities, available resources have to be used as efficiently as possible to address important challenges and goals, such as the Millennium Development Goals. Use of research evidence to inform decisions is crucial. As suggested by Hassan Mshinda, the Director-General of the Commission for Science and Technology in Tanzania: “If you are poor, actually you need more evidence before you invest, rather than if you are rich.” But neither the problem nor the need for solutions is limited either to health or countries of low and middle income. Expenditures and the potential for waste are greatest in high-income countries, which also have restricted resources and unmet needs, particularly during a financial crisis. Having good evidence to inform difficult decisions can be politically attractive, as shown, for example, by the US Government's decision to include US$1·1 billion for comparative research (including systematic reviews and clinical trials) as part of its $787 billion economic stimulus bill. To paraphrase Billy Beane, Newt Gingrich, and John Kerry, who have argued for a health-care system that is driven by robust comparative clinical evidence, by substituting policy makers for doctors: “Evidence-based health care would not strip [policy makers] of their decision-making authority nor replace their expertise. Instead, data and evidence should complement a lifetime of experience, so that [policy makers] can deliver the best quality care at the lowest possible cost.” Lancet 2010; 375: 427–31


Journal of Research in Reading | 2002

A systematic review and meta-analysis of the effectiveness of information and communication technology (ICT) on the teaching of spelling

Carole Torgerson; Diana Elbourne

Recent Government policy in England and Wales on Information and Communication Technology (ICT) in schools is heavily influenced by a series of non-randomised controlled studies. The evidence from these evaluations is equivocal with respect to the effect of ICT on literacy. In order to ascertain whether there is any effect of ICT on one small area of literacy, spelling, a systematic review of all randomised controlled trials (RCTs) was undertaken. Relevant electronic databases (including BEI, ERIC, Web of Science, PsycINFO, The Cochrane Library) were searched. Seven relevant RCTs were identified and included in the review. When six of the seven studies were pooled in a meta-analysis there was an effect, not statistically significant, in favour of computer interventions (effect size = 0.37, 95% confidence interval = −0.02 to 0.77, p = 0.06). Sensitivity and sub-group analyses of the results did not materially alter findings. This review suggests that the teaching of spelling by using computer software may be as effective as conventional teaching of spelling, although the possibility of computer-taught spelling being inferior or superior cannot be confidently excluded due to the relatively small sample sizes of the identified studies. Ideally, large pragmatic randomised controlled trials need to be undertaken.


Medical Education | 2002

Educational research and randomised trials

Carole Torgerson

Within most research fields where the use of the randomised controlled trial (RCT) is feasible, it is generally acknowledged as representing the gold standard of evaluative research. An exception to this is the area of educational research. In contrast with health care researchers, educational researchers rarely use the RCT method. Instead, they choose either to rely on other, manifestly inferior, quantitative methods, such as case control or cohort studies, or to use qualitative methodologies. The latter approach can yield information about strategies and processes, but it cannot assess effectiveness.


Oxford Review of Education | 2012

Single Group, Pre- and Post-Test Research Designs: Some Methodological Concerns

Emma Marsden; Carole Torgerson

This article provides two illustrations of some of the factors that can influence findings from pre- and post-test research designs in evaluation studies, including regression to the mean (RTM), maturation, history and test effects. The first illustration involves a re-analysis of data from a study by Marsden (2004), in which pre-test scores are plotted against gain scores to demonstrate RTM effects. The second illustration is a methodological review of single group, pre- and post-test research designs (pre-experiments) that evaluate causal relationships between intervention and outcome. Re-analysis of Marsden's prior data shows that learners with higher baseline scores consistently made smaller gains than those with lower baseline scores, demonstrating that RTM is clearly observable in single group, pre-post test designs. Our review found that 13% of the sample of 490 articles were evaluation studies. Of these evaluation studies, about half used an experimental design. However, a quarter used a single group, pre-post test design, and researchers using these designs did not mention possible RTM effects in their explanations, although other explanatory factors were mentioned. We conclude by describing how using experimental or quasi-experimental designs would have enabled researchers to explain their findings more accurately, and to draw more useful implications for pedagogy.


Journal of Further and Higher Education | 2005

Writing for publication: what counts as a ‘high status, eminent academic journal’?

Jerry Wellington; Carole Torgerson

One of the key professional concerns for staff working in higher and, to some extent, further education is to be involved in scholarly publishing. Within that concern, as a direct result of recent research assessment exercises, is the question of ‘what counts as a high status journal’? The question is a vitally important one for lecturers and researchers who are striving to achieve highly in research ratings. Yet it has not been the subject of either wide debate or extensive empirical work. In an attempt to provoke discussion on this issue, this article reports and discusses a recent survey of professors in both the UK and USA which aimed to elicit their views on the criteria for eminence and high status and also asked them to name journals that come into this category. Although the responses show that there is some degree of consensus, large differences exist between the views of individual respondents. In addition, a large number of journals are named as being of ‘high status’. Important differences between perceptions in the UK and USA are also highlighted. The aim of the paper is to open up a debate on journals and their status in a way which might be valuable for personal and institutional development in the future.


Educational Studies | 2006

Is an intervention using computer software effective in literacy learning? A randomised controlled trial

Greg Brooks; Jeremy N. V. Miles; Carole Torgerson; David Torgerson


British Journal of Educational Studies | 2003

Avoiding Bias in Randomised Controlled Trials in Educational Research

David Torgerson; Carole Torgerson
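
The regression-to-the-mean effect described in the single group, pre- and post-test methodology abstract above can be demonstrated with a short simulation; all score distributions below are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Each learner has a stable 'true' ability; each test adds noise.
# With no intervention at all, learners with low pre-test scores
# still tend to gain, and high scorers tend to lose: regression
# to the mean, not a treatment effect.
true_ability = [random.gauss(50, 8) for _ in range(500)]
pre  = [a + random.gauss(0, 6) for a in true_ability]
post = [a + random.gauss(0, 6) for a in true_ability]

median_pre = statistics.median(pre)
low  = [po - pr for pr, po in zip(pre, post) if pr <  median_pre]
high = [po - pr for pr, po in zip(pre, post) if pr >= median_pre]

print(f"mean gain, low pre-test scorers:  {statistics.mean(low):+.1f}")
print(f"mean gain, high pre-test scorers: {statistics.mean(high):+.1f}")
```

A single-group pre/post evaluation would misread the positive gain among low scorers as an intervention effect, which is why the abstract argues for experimental or quasi-experimental comparison groups.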

Collaboration


Top Co-Authors

Greg Brooks

University of Sheffield