Stefan K. Schauber
Charité
Publications
Featured research published by Stefan K. Schauber.
Medical Education | 2012
Zineb Miriam Nouns; Stefan K. Schauber; Claudia M. Witt; Halina Kingreen; Katrin Schüttpelz-Brauns
Medical Education 2012: 46: 1206–1214
JAMA | 2015
Wolf E. Hautz; Juliane E. Kämmer; Stefan K. Schauber; Claudia Spies; Wolfgang Gaissmaier
Diagnostic errors contribute substantially to preventable medical error.1 Cognitive error is among the leading causes and mostly results from faulty data synthesis.2 Furthermore, reflecting on their confidence does not prevent physicians from committing diagnostic errors.1 Diagnostic decisions usually are not made by individual physicians working alone. Our aim was to investigate the effect of working in pairs as opposed to alone on diagnostic performance.
Medical Education | 2013
Stefan K. Schauber; Martin Hecht; Zineb Miriam Nouns; Susanne Dettmer
Basic science teaching in undergraduate medical education faces several challenges. One prominent discussion is focused on the relevance of biomedical knowledge to the development and integration of clinical knowledge. Although the value of basic science knowledge is generally emphasised, theoretical positions on the relative role of this knowledge and the optimal approach to its instruction differ. The present paper addresses whether and to what extent biomedical knowledge is related to the development of clinical knowledge.
Medical Teacher | 2010
Stefan K. Schauber; Zineb Miriam Nouns
The Berlin Progress Test has grown into a cooperation of 13 universities. Comparisons between the participating schools have recently become an area of high interest. Muijtjens et al. [Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, Thoben AJNM, van der Vleuten CPM. 2008a. Benchmarking by cross-institutional comparison of student achievement in a progress test. Med Educ 41(1):82–88; Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, van der Vleuten CPM. 2008b. Differences in knowledge development exposed by multi-curricular progress test data. Adv Health Sci Educ 13:593–605] proposed a method for cross-institutional benchmarking based on progress test data. Progress testing has some major advantages, as it delivers longitudinal information about students' growth of knowledge. By adopting the procedure of Muijtjens et al. (2008a, b), we were able to replicate the basic characteristics of the cumulative deviation method. Besides the advantages of the method, there are some difficulties, as errors of measurement are not independent, which violates the assumptions underlying tests of statistical differences.
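As a minimal sketch of the cumulative deviation method referred to above (not the authors' code, and with hypothetical column names): for each progress test, each school's mean score is compared with the mean across all schools, and these deviations are cumulated over the ordered sequence of tests.

```python
# Sketch of the cumulative deviation benchmarking idea (Muijtjens et al. 2008a).
# Assumes a long-format table with one row per student per test.
import pandas as pd

def cumulative_deviation(scores: pd.DataFrame) -> pd.DataFrame:
    """scores: columns 'school', 'test', 'score' (hypothetical names)."""
    school_means = scores.groupby(["school", "test"])["score"].mean().unstack("test")
    overall_means = scores.groupby("test")["score"].mean()
    # Deviation of each school from the all-schools mean at each test moment
    deviations = school_means.sub(overall_means, axis="columns")
    # Cumulate deviations across the ordered sequence of tests
    return deviations.cumsum(axis="columns")
```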
Medical Teacher | 2012
Yassin Karay; Stefan K. Schauber; Christoph Stosch; Katrin Schuettpelz-Brauns
Background Students’ motivation to participate is one of the main challenges in formative assessment. The utility framework identifies potential points of intervention for improving the acceptance of formative assessment [Van Der Vleuten C. 1996. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ 1(1):41–67]. At the Medical Faculty of the University of Cologne, the paper-based version of the Berlin Progress Test has been transformed into a computer-based version providing immediate feedback. Aim To investigate whether the introduction of computer-based assessment (CBA) enhances the acceptance of formative assessment relative to paper-based assessment (PBA). Methods In a retrospective cohort study (PBA: N = 2597, CBA: N = 2712), we surveyed students’ overall acceptance of the two forms of assessment, analyzed their comments, and analyzed their test behavior, categorizing students as “serious” or “non-serious” test takers. Results In the preclinical phase of medical education, no differences were found in overall acceptance of the two forms of assessment (p > 0.05). In the clinical phase, differences in favor of CBA were found in overall acceptance (p < 0.05), the proportion of positive comments (p < 0.001), and the proportion of serious participants (p < 0.001). Conclusions Introducing immediate feedback via CBA can enhance the acceptance and therefore the utility of formative assessment.
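One way to run the kind of proportion comparison reported in the results (e.g. serious vs. non-serious test takers by assessment format) is a chi-square test on the 2 × 2 count table. This is only an illustrative sketch, not the study's analysis code, and the counts are passed in rather than taken from the paper.

```python
# Hypothetical helper: chi-square test comparing the proportion of "serious"
# test takers between the paper-based (PBA) and computer-based (CBA) cohorts.
from scipy.stats import chi2_contingency

def compare_serious_proportions(pba_serious, pba_nonserious,
                                cba_serious, cba_nonserious):
    table = [[pba_serious, pba_nonserious],
             [cba_serious, cba_nonserious]]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, dof, p
```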
BMJ Open | 2016
Stefanie C. Hautz; Luca Schuler; Juliane E. Kämmer; Stefan K. Schauber; Meret E. Ricklin; Thomas Sauter; Volker Maier; Tanja Birrenbach; Aristomenis K. Exadaktylos; Wolf E. Hautz
Introduction Emergency rooms (ERs) generally assign a preliminary diagnosis to patients, who are then hospitalised and may subsequently experience a change in their lead diagnosis (cDx). In ERs, the cDx rate varies from around 15% to more than 50%. Among the most frequent reasons for diagnostic errors are cognitive slips, which mostly result from faulty data synthesis. Furthermore, physicians have been repeatedly found to be poor self-assessors and to be overconfident in the quality of their diagnosis, which limits their ability to improve. Therefore, some of the clinically most relevant research questions concern how diagnostic decisions are made, what determines their quality and what can be done to improve them. Research that addresses these questions is, however, still rare. In particular, field studies that allow for generalising findings from controlled experimental settings are lacking. The ER, with its high throughput and its many simultaneous visits, is perfectly suited for the study of factors contributing to diagnostic error. With this study, we aim to identify factors that allow prediction of an ER’s diagnostic performance. Knowledge of these factors as well as of their relative importance allows for the development of organisational, medical and educational strategies to improve the diagnostic performance of ERs. Methods and analysis We will conduct a field study by collecting diagnostic decision data, physician confidence and a number of influencing factors in a real-world setting to model real-world diagnostic decisions and investigate the adequacy, validity and informativeness of physician confidence in these decisions. We will specifically collect data on patient, physician and encounter factors as predictors of the dependent variables. Statistical methods will include analysis of variance and a linear mixed-effects model. Ethics and dissemination The Bern ethics committee approved the study under KEK Number 197/15. Results will be published in peer-reviewed scientific medical journals. Authorship will be determined according to ICMJE guidelines. Trial registration number The study protocol Version 1.0 from 17 May 2015 is registered in the Inselspital Research Database Information System (IRDIS) and with the IRB (‘Kantonale Ethikkommission’) Bern under KEK Number 197/15.
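The protocol announces a linear mixed-effects model without specifying it; a plausible sketch, with assumed variable names rather than the authors' specification, is a model of encounter-level diagnostic performance with a random intercept per physician to account for repeated encounters within physicians.

```python
# Sketch of a linear mixed-effects model of the kind announced in the protocol.
# Column names ('accuracy', 'confidence', 'case_difficulty', 'physician_id')
# are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

def fit_diagnostic_model(df: pd.DataFrame):
    """df: one row per patient encounter."""
    model = smf.mixedlm("accuracy ~ confidence + case_difficulty",
                        data=df, groups=df["physician_id"])
    return model.fit()
```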
Teaching and Learning in Medicine | 2015
Yassin Karay; Stefan K. Schauber; Christoph Stosch; Katrin Schüttpelz-Brauns
Construct: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students’ prior performance. Background: Computer-based tests allow a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students’ test performance. Approach: A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. Results: The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior. Low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Conclusions: Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.
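The randomized matched-pair allocation described in the approach can be sketched as follows: sort students by prior test result, pair adjacent students, and randomly assign one member of each pair to each format. This is an illustrative reconstruction under assumed data structures, not the study's actual allocation script.

```python
# Sketch of a matched-pair randomisation on prior test scores.
import random

def matched_pair_allocation(students, seed=0):
    """students: list of (student_id, prior_score) tuples (hypothetical format)."""
    rng = random.Random(seed)
    ordered = sorted(students, key=lambda s: s[1], reverse=True)
    allocation = {}
    # Adjacent students in the prior-score ranking form a pair;
    # within each pair, format is assigned at random.
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i][0], ordered[i + 1][0]]
        rng.shuffle(pair)
        allocation[pair[0]] = "paper"
        allocation[pair[1]] = "computer"
    return allocation
```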
Advances in Health Sciences Education | 2018
Stefan K. Schauber; Martin Hecht; Zineb Miriam Nouns
Abstract Despite the frequent use of state-of-the-art psychometric models in the field of medical education, there is a growing body of literature that questions their usefulness in the assessment of medical competence. Essentially, a number of authors raised doubt about the appropriateness of psychometric models as a guiding framework to secure and refine current approaches to the assessment of medical competence. In addition, an intriguing phenomenon known as case specificity is specific to the controversy on the use of psychometric models for the assessment of medical competence. Broadly speaking, case specificity is the finding of instability of performances across clinical cases, tasks, or problems. As stability of performances is, generally speaking, a central assumption in psychometric models, case specificity may limit their applicability. This has probably fueled critiques of the field of psychometrics with a substantial amount of potential empirical evidence. This article aimed to explain the fundamental ideas employed in psychometric theory, and how they might be problematic in the context of assessing medical competence. We further aimed to show why and how some critiques do not hold for the field of psychometrics as a whole, but rather only for specific psychometric approaches. Hence, we highlight approaches that, from our perspective, seem to offer promising possibilities when applied in the assessment of medical competence. In conclusion, we advocate for a more differentiated view on psychometric models and their usage.
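To make the notion of case specificity concrete, a small simulation (not taken from the paper) can show how a large person-by-case interaction variance, relative to the person variance, drives down the correlation between scores on different cases, i.e. performance appears unstable across cases.

```python
# Illustrative simulation of case specificity: scores = person ability
# plus case-specific deviation. All parameters are made-up defaults.
import numpy as np

def average_inter_case_correlation(n_persons=500, n_cases=8,
                                   person_sd=1.0, interaction_sd=2.0, seed=0):
    rng = np.random.default_rng(seed)
    ability = rng.normal(0, person_sd, size=(n_persons, 1))        # stable component
    specific = rng.normal(0, interaction_sd, size=(n_persons, n_cases))  # case-specific
    scores = ability + specific
    corr = np.corrcoef(scores, rowvar=False)                        # case-by-case matrix
    return corr[~np.eye(n_cases, dtype=bool)].mean()

# With interaction_sd much larger than person_sd the average correlation is low
# (case specificity); shrinking interaction_sd toward 0 pushes it toward 1.
```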
Archive | 2014
Christiane Siegling-Vlitakis; Stephan Birk; Anita Kröger; Cyrill Matenaers; Christiane Beitz-Radzio; Carsten Staszyk; Stefan Arnhold; Birte Pfeiffer-Morhenn; Thomas Vahlenkamp; Christoph Mülling; Evelyn Bergsmann; Christian Gruber; Peter Stucki; Marietta Schönmann; Zineb Miriam Nouns; Stefan K. Schauber; Sebastian Schubert; Jan P. Ehlers
Veterinary medicine students often focus on the mere acquisition of knowledge during their studies and are less aware of what they have already achieved. The Progress Test Tiermedizin (PTT) makes it possible to depict learning progress from the start of the degree programme up to qualification as a veterinarian. This article explains the concept of the PTT.
Medical Education | 2014
Stefan K. Schauber; Lennart Schalk