Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sara Schroter is active.

Publication


Featured research published by Sara Schroter.


PLOS Medicine | 2013

Prognosis Research Strategy (PROGRESS) 3: Prognostic Model Research

Ewout W. Steyerberg; Karel G.M. Moons; D.A.W.M. van der Windt; Jill Hayden; Pablo Perel; Sara Schroter; Richard D Riley; Harry Hemingway; Douglas G. Altman

In this article, the third in the PROGRESS series on prognosis research, Sara Schroter and colleagues review how prognostic models are developed and validated, and then address how prognostic models are assessed for their impact on practice and patient outcomes, illustrating these ideas with examples.


PLOS Medicine | 2013

Prognosis Research Strategy (PROGRESS) 2: Prognostic Factor Research

Richard D Riley; Jill Hayden; Ewout W. Steyerberg; Karel G.M. Moons; Keith R. Abrams; Panayiotis A. Kyzas; Núria Malats; Andrew Briggs; Sara Schroter; Douglas G. Altman; Harry Hemingway

In the second article in the PROGRESS series on prognostic factor research, Sara Schroter and colleagues discuss the role of prognostic factors in current clinical practice, randomised trials, and developing new interventions, and explain why and how prognostic factor research should be improved.


BMJ | 2013

Prognosis research strategy (PROGRESS) 1: a framework for researching clinical outcomes.

Harry Hemingway; Peter Croft; Pablo Perel; Jill Hayden; Keith R. Abrams; Adam Timmis; Andrew Briggs; Ruzan Udumyan; Karel G.M. Moons; Ewout W. Steyerberg; Ian Roberts; Sara Schroter; Douglas G. Altman; Richard D Riley

Understanding and improving the prognosis of a disease or health condition is a priority in clinical research and practice. In this article, the authors introduce a framework of four interrelated themes in prognosis research, describe the importance of the first of these themes (understanding future outcomes in relation to current diagnostic and treatment practices), and present recommendations for the field of prognosis research.


BMJ | 2013

Prognosis research strategy (PROGRESS) 4: Stratified medicine research

Aroon D. Hingorani; Danielle van der Windt; Richard D Riley; Keith R. Abrams; Karel G.M. Moons; Ewout W. Steyerberg; Sara Schroter; Willi Sauerbrei; Douglas G. Altman; Harry Hemingway

In patients with a particular disease or health condition, stratified medicine seeks to identify those who will have the most clinical benefit or least harm from a specific treatment. In this article, the fourth in the PROGRESS series, the authors discuss why prognosis research should form a cornerstone of stratified medicine, especially in regard to the identification of factors that predict individual treatment response.


Journal of the Royal Society of Medicine | 2008

What errors do peer reviewers detect, and does training improve their ability to detect them?

Sara Schroter; Nick Black; Stephen Evans; Fiona Godlee; Lyda Osorio; Richard Smith

Objective: To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. Design: 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. Setting: BMJ peer reviewers. Main outcome measures: The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. Results: The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0 respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Conclusions: Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.


Journal of Epidemiology and Community Health | 2007

Why do peer reviewers decline to review? A survey

Leanne Tite; Sara Schroter

Background: Peer reviewers are usually unpaid and their efforts not formally acknowledged. Some journals have difficulty finding appropriate reviewers able to complete timely reviews, resulting in publication delay. Objectives and methods: A survey of peer reviewers from five biomedical journals was conducted to determine why reviewers decline to review and their opinions on reviewer incentives. Items were scored on 5-point Likert scales, with low scores indicating low importance or low agreement. Results: 551/890 (62%) reviewers responded. Factors rated most highly in importance for the decision to accept to review a paper included contribution of the paper to subject area (mean 3.67 (standard deviation (SD) 0.86)), relevance of topic to own work (mean 3.46 (SD 0.99)) and opportunity to learn something new (mean 3.41 (SD 0.96)). The most highly rated factor important in the decision to decline to review was conflict with other workload (mean 4.06 (SD 1.31)). Most respondents agreed that financial incentives would not be effective when time constraints are prohibitive (mean 3.59 (SD 1.01)). However, reviewers agreed that non-financial incentives might encourage reviewers to accept requests to review: free subscription to journal content (mean 3.72 (SD 1.04)), annual acknowledgement on the journal's website (mean 3.64 (SD 0.90)), more feedback about the outcome of the submission (mean 3.62 (SD 0.88)) and quality of the review (mean 3.60 (SD 0.89)), and appointment of reviewers to the journal's editorial board (mean 3.57 (SD 0.99)). Conclusion: Reviewers are more likely to accept to review a manuscript when it is relevant to their area of interest. Lack of time is the principal factor in the decision to decline. Reviewing should be formally recognised by academic institutions and journals should acknowledge reviewers' work.


Journal of Medical Ethics | 2006

Reporting ethics committee approval and patient consent by study design in five general medical journals

Sara Schroter; R. Plowman; Andrew Hutchings; A Gonzalez

Background: Authors are required to describe in their manuscripts ethical approval from an appropriate committee and how consent was obtained from participants when research involves human participants. Objective: To assess the reporting of these protections for several study designs in general medical journals. Design: A consecutive series of research papers published in the Annals of Internal Medicine, BMJ, JAMA, Lancet and The New England Journal of Medicine between February and May 2003 were reviewed for the reporting of ethical approval and patient consent. Ethical approval, name of approving committee, type of consent, data source and whether the study used data collected as part of a study reported elsewhere were recorded. Differences in failure to report approval and consent by study design, journal and vulnerable study population were evaluated using multivariable logistic regression. Results: Ethical approval and consent were not mentioned in 31% and 47% of manuscripts, respectively. 88 (27%) papers failed to report both approval and consent. Failure to mention ethical approval or consent was significantly more likely in all study designs (except case–control and qualitative studies) than in randomised controlled trials (RCTs). Failure to mention approval was most common in the BMJ and was significantly more likely than in The New England Journal of Medicine. Failure to mention consent was most common in the BMJ and was significantly more likely than in all other journals. No significant differences in approval or consent were found when comparing studies of vulnerable and non-vulnerable participants. Conclusion: The reporting of ethical approval and consent in RCTs has improved, but journals are less good at reporting this information for other study designs. Journals should publish this information for all research on human participants.


BMJ | 2016

Qualitative research and The BMJ.

Elizabeth Loder; Trish Groves; Sara Schroter; José G. Merino; Wim Weber

A response to Greenhalgh and colleagues' appeal for more qualitative research in the journal.


Learned Publishing | 2006

Financial support at the time of paper acceptance: a survey of three medical journals

Sara Schroter; Leanne Tite; Ahmed Kassem

The author-pays model (open access publishing funded through author charges) is dependent on authors having access to financial support at the time their research papers are accepted. We conducted an author survey to determine the availability of external funding for publication charges at different points in the research process. Of the 377/524 (72%) who responded, 62% (233/377) received external funding to support their study, but with notable differences between journals. Only 25% (95/377) could withdraw funds from a grant at the time of paper acceptance. The grant was closed at this time for almost half (105/233, 45%) of those who were externally funded. Non-externally funded research was largely supported through departmental resources (56%, 80/144) or carried out in the researchers' own time (63%, 91/144). To conclude, a large proportion of published research is not externally funded, and many funded researchers do not have access to financial support at the time their paper is accepted for publication.


BMJ | 2016

Co-creating health: more than a dream

Tessa Richards; Rosamund Snow; Sara Schroter

The slow march towards true partnership with patients, which The BMJ champions, is progressing.

Collaboration


Dive into Sara Schroter's collaborations.

Top Co-Authors

Trish Groves (Group Health Cooperative)
Harry Hemingway (University College London)
Ewout W. Steyerberg (Erasmus University Rotterdam)