Publication


Featured research published by Zineb Miriam Nouns.


Medical Teacher | 2010

Progress testing internationally

Adrian Freeman; Cees van der Vleuten; Zineb Miriam Nouns; Chris Ricketts

In progress testing, all learners in a programme (e.g. all students of the same curriculum) are given the same written test. The test is comprehensive, sampling all relevant disciplines in a curriculum, usually determined by a fixed blueprint. Because of the need for wide sampling, items are typically of the multiple-choice type. The test is repeated regularly in time; the same blueprint is used, but the test items are usually different on every test occasion. In this way, a comprehensive (across many content areas), cross-sectional (comparing the performance of groups of different ability) and longitudinal (comparing performance over time) picture is obtained of the knowledge of learners in relation to the end objectives of a curriculum.

This is the first time that so many contributions on progress testing have been published together. Although progress testing was initially developed in the 1970s by two institutions independently, it has taken quite a long time for other schools to adopt this very special testing procedure. The reason is probably twofold. First and foremost, the utility of the testing procedure is not easily understood. The concept is so different from our usual course-related testing that it takes time to really understand the ideas behind it and to see the potential benefits that may result from progress testing. The second reason is that progress testing can be logistically burdensome. It requires considerable effort for test development, test administration and test scoring. The resources and centralized governance required are probably major obstacles for many institutions wishing to engage in progress testing.

Nevertheless, in recent years, an increasing number of medical schools and other institutions have gained interest in progress testing and are using the method. In order to exchange information and experiences, a symposium on progress testing was held at the Association for Medical Education in Europe (AMEE) conference in 2009. Medical Teacher offered space for the subsequent publication of the papers, which you will find in this issue.


Medical Teacher | 2010

Progress testing in German speaking countries

Zineb Miriam Nouns; Waltraud Georg

Progress testing was introduced in 1999 at the Charité–Universitätsmedizin Berlin. The Berlin Progress Test Medizin (PTM) began cooperating with other medical schools in 2000. The cooperation has grown continuously, and 13 medical schools in Germany and Austria now take part, comprising more than 8500 students. This article focuses on the concept and quality of the PTM and its benefits for students and medical schools. It shows how an initially small student initiative has developed into a successful international cooperation on formative testing in medical education.


Medical Education | 2012

Development of knowledge in basic sciences: a comparison of two medical curricula

Zineb Miriam Nouns; Stefan K. Schauber; Claudia M. Witt; Halina Kingreen; Katrin Schüttpelz-Brauns

Medical Education 2012: 46: 1206–1214


Medical Education | 2013

On the role of biomedical knowledge in the acquisition of clinical knowledge

Stefan K. Schauber; Martin Hecht; Zineb Miriam Nouns; Susanne Dettmer

Basic science teaching in undergraduate medical education faces several challenges. One prominent discussion is focused on the relevance of biomedical knowledge to the development and integration of clinical knowledge. Although the value of basic science knowledge is generally emphasised, theoretical positions on the relative role of this knowledge and the optimal approach to its instruction differ. The present paper addresses whether and to what extent biomedical knowledge is related to the development of clinical knowledge.


Medical Teacher | 2010

Using the cumulative deviation method for cross-institutional benchmarking in the Berlin progress test

Stefan K. Schauber; Zineb Miriam Nouns

The Berlin Progress Test has grown into a cooperation of 13 universities. Recently, comparisons between the participating schools have become an area of high interest. Muijtjens et al. [Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, Thoben AJNM, van der Vleuten CPM. 2008a. Benchmarking by cross-institutional comparison of student achievement in a progress test. Med Educ 41(1):82–88; Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, van der Vleuten CPM. 2008b. Differences in knowledge development exposed by multi-curricular progress test data. Adv Health Sci Educ 13:593–605] proposed a method for cross-institutional benchmarking based on progress test data. Progress testing has some major advantages, as it delivers longitudinal information about students' growth of knowledge. By adopting the procedure of Muijtjens et al. (2008a, b), we were able to replicate the basic characteristics of the cumulative deviation method. Besides the advantages of the method, there are some difficulties: errors of measurement are not independent, which violates the premises of testing for statistical differences.
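The core of the cumulative deviation method can be illustrated with a minimal sketch: at each test occasion, each school's mean score is compared with the across-school mean, and these deviations are summed over occasions so that persistent above- or below-average performance accumulates into a visible trend. The `school_means` array below is entirely hypothetical, not data from the Berlin cooperation.

```python
import numpy as np

# Hypothetical mean percent-correct scores:
# rows = test occasions, columns = participating schools.
school_means = np.array([
    [52.0, 48.0, 50.0],
    [55.0, 50.0, 54.0],
    [60.0, 53.0, 58.0],
])

# Deviation of each school from the across-school mean at each occasion.
deviations = school_means - school_means.mean(axis=1, keepdims=True)

# Cumulative deviation: a running sum over occasions, so that consistent
# differences between schools accumulate while noise tends to cancel out.
cumulative = deviations.cumsum(axis=0)
```

Note that because the deviations at each occasion sum to zero by construction, the schools' cumulative curves are not independent of one another, which is one face of the dependence problem the abstract mentions.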


GMS Zeitschrift für medizinische Ausbildung | 2010

Using the Progress Test Medizin (PTM) for evaluation of the Medical Curriculum Munich (MeCuM)

Ralf Schmidmaier; Matthias Holzer; Matthias Angstwurm; Zineb Miriam Nouns; Martin Reincke; Martin R. Fischer

Aims: The Medical Curriculum Munich (MeCuM) has been implemented since 2004 and was completely established in 2007. In this study, the clinical part of MeCuM was evaluated with respect to retention of knowledge in internal medicine (learning objectives of the 6th/7th semester).

Methods: In the summer of 2009 and the winter of 2009/2010, 1065 students participated in the Progress Test Medizin (PTM) of the Charité Medical School Berlin. Additionally, the students answered a questionnaire regarding their acceptance and rating of the progress test as well as basic demographic data.

Results: Knowledge of internal medicine increases continuously during the clinical part of the medical curriculum in Munich. However, significant differences between the sub-disciplines of internal medicine could be observed. The overall acceptance of the PTM was high and increased further with study progress. Interestingly, practical experience such as clinical clerkships positively influenced the test score.

Conclusions: The PTM is a useful tool for the evaluation of knowledge retention in a specific curriculum.


PLOS ONE | 2018

The educational impact of Mini-Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) and its association with implementation: A systematic review and meta-analysis

Andrea Carolin Lörwald; Felicitas-Maria Lahner; Zineb Miriam Nouns; Christoph Berendonk; John J. Norcini; Robert Greif; Sören Huwendiek

Introduction: The Mini-Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) are used as formative assessments worldwide. Since an up-to-date comprehensive synthesis of the educational impact of Mini-CEX and DOPS is lacking, we performed a systematic review. Moreover, as the educational impact might be influenced by characteristics of the setting in which Mini-CEX and DOPS take place, or by their implementation status, we additionally investigated these potential influences.

Methods: We searched Scopus, Web of Science, and Ovid, including All Ovid Journals, Embase, ERIC, Ovid MEDLINE(R), and PsycINFO, for original research articles investigating the educational impact of Mini-CEX and DOPS on undergraduate and postgraduate trainees from all health professions, published in English or German from 1995 to 2016. Educational impact was operationalized and classified using Barr's adaptation of Kirkpatrick's four-level model. Where applicable, outcomes were pooled in meta-analyses, separately for Mini-CEX and DOPS. To examine potential influences, we used Fisher's exact test for count data.

Results: We identified 26 articles demonstrating heterogeneous effects of Mini-CEX and DOPS on learners' reactions (Kirkpatrick Level 1) and positive effects of Mini-CEX and DOPS on trainees' performance (Kirkpatrick Level 2b; Mini-CEX: standardized mean difference (SMD) = 0.26, p = 0.014; DOPS: SMD = 3.33, p < 0.001). No studies were found on higher Kirkpatrick levels. Regarding potential influences, we found two implementation characteristics, "quality" and "participant responsiveness", to be associated with the educational impact.

Conclusions: Despite the limited evidence, the meta-analyses demonstrated positive effects of Mini-CEX and DOPS on trainee performance. Additionally, we found implementation characteristics to be associated with the educational impact. Hence, we assume that considering implementation characteristics could increase the educational impact of Mini-CEX and DOPS.
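For orientation, a pooled SMD such as the 0.26 reported for Mini-CEX combines per-study standardized mean differences. The sketch below shows the basic arithmetic under simple assumptions: Cohen's d with a pooled standard deviation, and fixed-effect inverse-variance weighting. All numbers are illustrative, not the review's data; published meta-analyses typically also apply small-sample corrections (Hedges' g) and random-effects models.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def pooled_smd(smds, variances):
    """Fixed-effect pooled SMD: weight each study by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, smds)) / sum(weights)

# Illustrative use: two hypothetical studies with equal variance pool to their mean.
example_d = cohens_d(10, 2, 30, 8, 2, 30)
example_pooled = pooled_smd([0.2, 0.4], [0.1, 0.1])
```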


Advances in Health Sciences Education | 2018

Why assessment in medical education needs a solid foundation in modern test theory

Stefan K. Schauber; Martin Hecht; Zineb Miriam Nouns

Despite the frequent use of state-of-the-art psychometric models in the field of medical education, there is a growing body of literature that questions their usefulness in the assessment of medical competence. Essentially, a number of authors have raised doubts about the appropriateness of psychometric models as a guiding framework to secure and refine current approaches to the assessment of medical competence. In addition, an intriguing phenomenon known as case specificity is central to the controversy on the use of psychometric models for the assessment of medical competence. Broadly speaking, case specificity is the finding that performances are unstable across clinical cases, tasks, or problems. As stability of performances is, generally speaking, a central assumption in psychometric models, case specificity may limit their applicability; this has probably supplied critiques of the field of psychometrics with a substantial amount of potential empirical evidence. This article aims to explain the fundamental ideas employed in psychometric theory and how they might be problematic in the context of assessing medical competence. We further aim to show why and how some critiques do not hold for the field of psychometrics as a whole, but only for specific psychometric approaches. Hence, we highlight approaches that, from our perspective, seem to offer promising possibilities when applied in the assessment of medical competence. In conclusion, we advocate a more differentiated view of psychometric models and their usage.


Archive | 2015

Developing an alternative response format for the script concordance test

Felicitas-Maria Lahner; Zineb Miriam Nouns; Sören Huwendiek

Introduction: Clinical reasoning is essential to the practice of medicine. Theories of the development of medical expertise state that clinical reasoning starts from analytical processes, namely the storage of isolated facts and the logical application of the 'rules' of diagnosis. Learners then successively develop so-called semantic networks and illness scripts, which are finally used in an intuitive, non-analytic fashion [1], [2]. The script concordance test (SCT) is one example of an instrument for assessing clinical reasoning [3]. However, the aggregate scoring [3] of the SCT is recognized as problematic [4]: it leads to logical inconsistencies and is likely to reflect construct-irrelevant differences in examinees' response styles [4], and the expert panel judgments might introduce an unintended error of measurement [4]. This PhD project addresses the following research questions: 1. What would a format look like that assesses clinical reasoning (similar to the SCT) with multiple true-false questions or other formats with unambiguously correct answers, thereby addressing the above-mentioned pitfalls in traditional SCT scoring? 2. How well does this format fulfil the Ottawa criteria for good assessment, with special regard to educational and catalytic effects [5]?

Methods: 1. A first study will assess whether a new format using multiple true-false items to assess clinical reasoning, similar to the SCT format, can be designed in a theoretically and practically sound fashion. For this study, focus groups or interviews with assessment experts and students will be undertaken. 2. In a study using focus groups and psychometric data, Norcini and colleagues' criteria for good assessment [5] will be examined for the new format in a real assessment. Furthermore, the scoring method for the new format will be optimized using real and simulated data.


Medical Education | 2015

Losing connectivity when using EHRs: a technological or an educational problem?

Zineb Miriam Nouns; Stephanie Montagne; Sören Huwendiek

In this issue, the article ‘The impact of adopting EHRs: how losing connectivity affects clinical reasoning’, by Varpio et al., investigates how clinicians experienced the move from a paper-based health record to an electronic health record (EHR) and how this affected clinical reasoning. They found that previously chronological and interconnected data were replaced by data points that were ‘largely chronologically and contextually isolated’. The study participants experienced this as a loss of clinical reasoning support mechanisms. They no longer knew the patients’ evolving status and reported an increased cognitive workload. In their extensive constructivist grounded theory study, the authors found that the negative effect on clinical reasoning could be attributed to a loss of connectivity when using the EHR.

Collaboration


Dive into Zineb Miriam Nouns's collaboration network.

Top Co-Authors

Martin Hecht

Humboldt University of Berlin
