Ann Sofia Skou Thomsen
University of Copenhagen
Publications
Featured research published by Ann Sofia Skou Thomsen.
Canadian Medical Association Journal | 2013
Asbjørn Hróbjartsson; Ann Sofia Skou Thomsen; Frida Emanuelsson; Britta Tendal; Jørgen Hilden; Isabelle Boutron; Philippe Ravaud; Stig Brorson
Background: Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales. Methods: We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation. Results: We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I2 = 46%, p = 0.02) and unexplained by metaregression. Interpretation: We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.
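The pooling step described above (inverse-variance random-effects meta-analysis of per-trial differences in effect size) can be sketched as follows. This is a minimal illustration using the DerSimonian-Laird estimator, one common random-effects method; the per-trial values below are hypothetical, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effect estimates with DerSimonian-Laird
    random-effects meta-analysis (inverse-variance weighting)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # heterogeneity %
    return pooled, ci, i2

# Hypothetical per-trial differences in effect size
# (standardized mean difference, nonblinded minus blinded assessment)
diffs = [-0.35, -0.10, -0.40, 0.05, -0.25]
vars_ = [0.04, 0.02, 0.05, 0.03, 0.04]
pooled, ci, i2 = dersimonian_laird(diffs, vars_)
```

A pooled value below zero, as in the review, indicates that nonblinded assessors produced more optimistic estimates on average.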
BMJ | 2012
Asbjørn Hróbjartsson; Ann Sofia Skou Thomsen; Frida Emanuelsson; Britta Tendal; Jørgen Hilden; Isabelle Boutron; Philippe Ravaud; Stig Brorson
Objective To evaluate the impact of non-blinded outcome assessment on estimated treatment effects in randomised clinical trials with binary outcomes. Design Systematic review of trials with both blinded and non-blinded assessment of the same binary outcome. For each trial we calculated the ratio of the odds ratios—the odds ratio from non-blinded assessments relative to the corresponding odds ratio from blinded assessments. A ratio of odds ratios <1 indicated that non-blinded assessors generated more optimistic effect estimates than blinded assessors. We pooled the individual ratios of odds ratios with inverse variance random effects meta-analysis and explored reasons for variation in ratios of odds ratios with meta-regression. We also analysed rates of agreement between blinded and non-blinded assessors and calculated the number of patients needed to be reclassified to neutralise any bias. Data Sources PubMed, Embase, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press, and Google Scholar. Eligibility criteria for selecting studies Randomised clinical trials with blinded and non-blinded assessment of the same binary outcome. Results We included 21 trials in the main analysis (with 4391 patients); eight trials provided individual patient data. Outcomes in most trials were subjective—for example, qualitative assessment of the patient’s function. The ratio of the odds ratios ranged from 0.02 to 14.4. The pooled ratio of odds ratios was 0.64 (95% confidence interval 0.43 to 0.96), indicating an average exaggeration of the non-blinded odds ratio by 36%. We found no significant association between low ratios of odds ratios and scores for outcome subjectivity (P=0.27); non-blinded assessor’s overall involvement in the trial (P=0.60); or outcome vulnerability to non-blinded patients (P=0.52). Blinded and non-blinded assessors agreed in a median of 78% of assessments (interquartile range 64-90%) in the 12 trials with available data. 
The exaggeration of treatment effects associated with non-blinded assessors was induced by the misclassification of a median of 3% of the assessed patients per trial (1-7%). Conclusions On average, non-blinded assessors of subjective binary outcomes generated substantially biased effect estimates in randomised clinical trials, exaggerating odds ratios by 36%. This bias was compatible with a high rate of agreement between blinded and non-blinded outcome assessors and driven by the misclassification of few patients.
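The trial-level computation behind the ratio of odds ratios can be sketched as follows; the 2x2 counts are hypothetical and not drawn from the review. The same binary outcome is scored once by nonblinded and once by blinded assessors, and the ratio of the two odds ratios quantifies the observer bias.

```python
def odds_ratio(events_t, no_events_t, events_c, no_events_c):
    """Odds ratio from a 2x2 table (treatment vs control, events vs non-events)."""
    return (events_t * no_events_c) / (no_events_t * events_c)

# Hypothetical counts for one trial's harm outcome, assessed twice:
or_nonblinded = odds_ratio(12, 38, 20, 30)   # nonblinded assessors record fewer events
or_blinded = odds_ratio(16, 34, 20, 30)      # blinded assessors

# Ratio of odds ratios: values < 1 mean the nonblinded estimate is more optimistic.
ror = or_nonblinded / or_blinded
exaggeration_pct = (1 - ror) * 100
```

With these made-up counts the ratio of odds ratios falls below 1, mirroring the direction of the pooled result (0.64, i.e. a 36% exaggeration) reported above.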
International Journal of Epidemiology | 2014
Asbjørn Hróbjartsson; Frida Emanuelsson; Ann Sofia Skou Thomsen; Jørgen Hilden; Stig Brorson
BACKGROUND Blinding patients in clinical trials is a key methodological procedure, but the expected degree of bias due to nonblinded patients on estimated treatment effects is unknown. METHODS Systematic review of randomized clinical trials with one sub-study (i.e. experimental vs control) involving blinded patients and another, otherwise identical, sub-study involving nonblinded patients. Within each trial, we compared the difference in effect sizes (i.e. standardized mean differences) between the sub-studies. A difference <0 indicates that nonblinded patients generated a more optimistic effect estimate. We pooled the differences with random-effects inverse variance meta-analysis, and explored reasons for heterogeneity. RESULTS Our main analysis included 12 trials (3869 patients). The average difference in effect size for patient-reported outcomes was -0.56 (95% confidence interval -0.71 to -0.41; I2 = 60%, P = 0.004), i.e. nonblinded patients exaggerated the effect size by an average of 0.56 standard deviations, but with considerable variation. Two of the 12 trials also used observer-reported outcomes, showing no indication of exaggerated effects due to lack of patient blinding. There was a larger effect size difference in the 10 acupuncture trials [-0.63 (-0.77 to -0.49)] than in the two non-acupuncture trials [-0.17 (-0.41 to 0.07)]. Lack of patient blinding also increased attrition and use of co-interventions: ratio of control group attrition risk 1.79 (1.18 to 2.70), and ratio of control group co-intervention risk 1.55 (0.99 to 2.43). CONCLUSIONS This study provides empirical evidence of pronounced bias due to lack of patient blinding in complementary/alternative randomized clinical trials with patient-reported outcomes.
Acta Ophthalmologica | 2015
Ann Sofia Skou Thomsen; Jens Folke Kiilgaard; Hadi Kjærbo; Morten la Cour; Lars Konge
To evaluate the EyeSi™ simulator in regard to assessing competence in cataract surgery. The primary objective was to explore all simulator metrics to establish a proficiency‐based test with solid evidence. The secondary objective was to evaluate whether the skill assessment was specific to cataract surgery.
Ophthalmology | 2015
Ann Sofia Skou Thomsen; Yousif Subhi; Jens Folke Kiilgaard; Morten la Cour; Lars Konge
TOPIC This study reviews the evidence behind simulation-based surgical training of ophthalmologists to determine (1) the validity of the reported models and (2) the ability to transfer skills to the operating room. CLINICAL RELEVANCE Simulation-based training is established widely within ophthalmology, although it often lacks a scientific basis for implementation. METHODS We conducted a systematic review of trials involving simulation-based training or assessment of ophthalmic surgical skills among health professionals. The search included 5 databases (PubMed, EMBASE, PsycINFO, Cochrane Library, and Web of Science) and was completed on March 1, 2014. Overall, the included trials were divided into animal, cadaver, inanimate, and virtual-reality models. Risk of bias was assessed using the Cochrane Collaboration's tool. Validity evidence was evaluated using a modern validity framework (Messick's). RESULTS We screened 1368 reports for eligibility and included 118 trials. The most common surgery simulated was cataract surgery. Most validity trials investigated only 1 or 2 of 5 sources of validity (87%). Only 2 trials (48 participants) investigated transfer of skills to the operating room; 4 trials (65 participants) evaluated the effect of simulation-based training on patient-related outcomes. Because of heterogeneity of the studies, it was not possible to conduct a quantitative analysis. CONCLUSIONS The methodologic rigor of trials investigating simulation-based surgical training in ophthalmology is inadequate. To ensure effective implementation of training models, evidence-based knowledge of validity and efficacy is needed. We provide a useful tool for implementation and evaluation of research in simulation-based training.
Jmir mhealth and uhealth | 2015
Yousif Subhi; Sarah Bube; Signe Rolskov Bojsen; Ann Sofia Skou Thomsen; Lars Konge
Background Both clinicians and patients use medical mobile phone apps. Anyone can publish medical apps, which leads to content of variable quality that may have a serious impact on human lives. We herein provide an overview of the prevalence of expert involvement in app development and whether or not app content adheres to current medical evidence. Objective To systematically review studies evaluating expert involvement or adherence of app content to medical evidence in medical mobile phone apps. Methods We systematically searched 3 databases (PubMed, The Cochrane Library, and EMBASE), and included studies evaluating expert involvement or adherence of app content to medical evidence in medical mobile phone apps. Two authors performed data extraction independently. Qualitative analysis of the included studies was performed. Results Based on inclusion criteria, 52 studies were included in this review. These studies assessed a total of 6520 apps. Studies dealt with a variety of medical specialties and topics. As many as 28 studies assessed expert involvement, which was found in 9-67% of the assessed apps. Thirty studies (including 6 studies that also assessed expert involvement) assessed adherence of app content to current medical evidence. Thirteen studies found that 10-87% of the assessed apps adhered fully to the compared evidence (published studies, recommendations, and guidelines). Seventeen studies found that none of the assessed apps (n=2237) adhered fully to the compared evidence. Conclusions Most medical mobile phone apps lack expert involvement and do not adhere to relevant medical evidence.
Acta Ophthalmologica | 2017
Ann Sofia Skou Thomsen; Phillip Smith; Yousif Subhi; Morten la Cour; Lilian Tang; George M. Saleh; Lars Konge
To investigate the correlation in performance of cataract surgery between a virtual‐reality simulator and real‐life surgery using two objective assessment tools with evidence of validity.
Acta Ophthalmologica | 2017
Ann Sofia Skou Thomsen; Jens Folke Kiilgaard; Morten la Cour; Ryan Brydges; Lars Konge
To investigate how experience in simulated cataract surgery impacts and transfers to the learning curves for novices in vitreoretinal surgery.
Clinical Ophthalmology | 2016
Nanna Jo Borgersen; Mikael Johannes Vuokko Henriksen; Lars Konge; Torben Lykke Sørensen; Ann Sofia Skou Thomsen; Yousif Subhi
Background Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. Methods In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student and junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. Results We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8-14 key points) was covered; no videos covered all of the 18 points assessed. The approach to visualization was most deficient for how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Conclusion Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and place particular emphasis on fundus examination.
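The correlation reported above is a Spearman rank correlation, i.e. the Pearson correlation of the rank-transformed values. A minimal stdlib sketch, using made-up values rather than the study's data:

```python
import math

def avg_ranks(xs):
    """Ranks 1..n, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):            # tied values share the average rank
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry))
    return num / den

# Hypothetical values: seconds spent on fundus examination vs views per week
fundus_seconds = [60, 150, 30, 240, 180]
views_per_week = [12, 30, 8, 40, 55]
rho = spearman_rho(fundus_seconds, views_per_week)
```

Because it operates on ranks, the statistic captures any monotonic association, which suits skewed quantities such as view counts.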
Acta Ophthalmologica | 2018
Ann Sofia Skou Thomsen; Morten la Cour; Charlotte Paltved; Karen Lindorff-Larsen; Bjørn Ulrik Nielsen; Lars Konge; Leizl Joy Nayahangan
The number of available simulation‐based models for technical skills training in ophthalmology is rapidly increasing, and development of training programmes around these procedures should follow a structured approach. The aim of this study was to identify all technical procedures that should be integrated in a simulation‐based curriculum in ophthalmology.