Publication


Featured research published by Mary E. W. Dankbaar.


Journal of Medical Internet Research | 2014

How to Systematically Assess Serious Games Applied to Health Care

Maurits Graafland; Mary E. W. Dankbaar; Agali Mert; Joep Lagro; Laura De Wit-Zuurendonk; Stephanie C. E. Schuit; Alma Schaafstal; Marlies P. Schijven

The usefulness and effectiveness of specific serious games in the medical domain are often unclear. This is caused by a lack of supporting evidence on the validity of individual games, as well as a lack of publicly available information. Moreover, insufficient understanding of design principles among the individuals and institutions that develop or apply medical serious games compromises their use. This article provides the first consensus-based framework for the assessment of specific medical serious games. The framework comprises 62 items in 5 main themes, aimed at assessing a serious game's rationale, functionality, validity, and data safety. It allows caregivers and educators to make balanced choices when applying a serious game for health care purposes, and it provides game manufacturers with standards for the development of new, valid serious games.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2016

Preparing Residents Effectively in Emergency Skills Training With a Serious Game

Mary E. W. Dankbaar; Maartje Bakhuys Roozeboom; Esther Oprins; Frans Rutten; Jeroen J. G. van Merrienboer; Jan L. C. M. van Saase; Stephanie C. E. Schuit

Introduction: Training emergency care skills is critical for patient safety but cost-intensive. Serious games have been proposed as an engaging self-directed learning tool for complex skills. The objective of this study was to compare the cognitive skills and motivation of medical residents who used only a course manual as preparation for classroom training on emergency care with those of residents who used an additional serious game.

Methods: This was a quasi-experimental study with residents preparing for a rotation in the emergency department. The "reading" group received a course manual before classroom training; the "reading and game" group received this manual plus the game as preparation for the same training. Emergency skills were assessed before training (in residents who agreed to an extra pretraining assessment), using validated competency scales and a global performance scale. Motivation was also measured.

Results: The groups were comparable on important characteristics (e.g., experience with acute care). Before training, the reading-and-game group felt motivated to play the game and spent more self-study time (+2.5 hours) than the reading group. Compared with the reading group, game-playing residents showed higher scores on objectively measured and self-assessed clinical competencies but equal scores on the global performance scale, and they were equally motivated for training. After the 2-week training, no differences between the groups remained.

Conclusions: After preparing for training with an additional serious game, residents showed improved clinical competencies compared with residents who studied only the course material. After the 2-week training, this advantage disappeared. Future research should study the retention of game effects in blended designs.


PLOS ONE | 2014

Assessing the Assessment in Emergency Care Training

Mary E. W. Dankbaar; Karen M. Stegers-Jager; Frank Baarveld; Jeroen J. G. van Merriënboer; Geoff Norman; Frans Rutten; Jan L.C.M. van Saase; Stephanie C. E. Schuit

Objective: Each year over 1.5 million health care professionals attend emergency care courses. Despite the high stakes for patients and the extensive resources involved, little evidence exists on the quality of assessment. The aim of this study was to evaluate the validity and reliability of formats commonly used for assessing emergency care skills.

Methods: Residents were assessed at the end of a 2-week emergency course; a subgroup was videotaped. Psychometric analyses were conducted to assess the validity and inter-rater reliability of the assessment instrument, which included a checklist, a 9-item competency scale, and a global performance scale.

Results: A group of 144 residents and 12 raters participated in the study; 22 residents were videotaped and re-assessed by 8 raters. The checklists showed limited validity and poor inter-rater reliability for the dimensions "correct" and "timely" (ICC = .30 and .39, respectively). The competency scale had good construct validity, consisting of a clinical and a communication subscale. The internal consistency of the (sub)scales was high (α = .93/.91/.86). The inter-rater reliability was moderate for the clinical competency subscale (.49) and the global performance scale (.50), but poor for the communication subscale (.27). A generalizability study showed that a reliable assessment requires 5–13 raters when using checklists, but only 4 when using the clinical competency scale or the global performance scale.

Conclusions: This study shows poor validity and reliability when assessing emergency skills with checklists, but good validity and moderate reliability with clinical competency or global performance scales. Involving more raters can improve reliability substantially. Recommendations are made to improve this high-stakes skill assessment.
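The internal-consistency figures reported above (Cronbach's α of .93/.91/.86) follow a standard formula: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch of that computation is shown below; the rating matrix is hypothetical illustration data, not data from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of each subject's total
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Hypothetical ratings: 6 residents scored on a 4-item competency scale (1-5).
ratings = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [2, 2, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))  # high alpha: items vary together
```

Items that move together across subjects (as in this toy matrix) drive α toward 1, which is the pattern behind the high values the study reports for its competency (sub)scales.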


Games for Health | 2014

Gaming as a training tool to train cognitive skills in Emergency Care: how effective is it?

Mary E. W. Dankbaar; Maartje Bakhuys Roozeboom; Esther Oprins; Frans Rutten; Jan L. C. M. van Saase; Jeroen J. G. van Merriënboer; Stephanie C. E. Schuit

Training emergency care skills is critical for patient safety and an essential part of medical education. Increasing demands on doctors' competences and limited training budgets necessitate new, cost-effective training methods. In the last decade, serious games have been promoted for training complex skills; they are expected to facilitate active, engaging, and intrinsically motivated learning. Erasmus MC has developed a serious game to train emergency care skills as a preparation for face-to-face training. This 'abcdeSIM' game provides a realistic online environment in which doctors can assess and stabilize patients in a virtual emergency department.


International Journal of Technology Enhanced Learning | 2015

The game-based learning evaluation model GEM: measuring the effectiveness of serious games using a standardised method

Esther Oprins; Gillian Visschedijk; Maartje Bakhuys Roozeboom; Mary E. W. Dankbaar; Wim Trooster; Stephanie C. E. Schuit

This article describes the background, design, and practical application of the game-based learning evaluation model GEM. The aim of this model is to measure the effectiveness of serious games in a practical way. GEM specifies the methodology and the indicators to be measured in validation research. Measuring generic learning and design indicators makes it possible to apply GEM to multiple games, and the results provide insight into why serious games are effective. This evidence will help serious game designers improve their games. Three empirical studies, based on various serious games applied in different contexts, show how GEM can be used in practice and how these studies have contributed to the improvement of GEM.


BJS Open | 2018

Creation of a universal language for surgical procedures using the step-by-step framework

T. Nazari; E. J. Vlieger; Mary E. W. Dankbaar; J.J.G. van Merriënboer; J. F. Lange; T. Wiggers

Learning surgical procedures is traditionally based on a master–apprentice model. Segmenting procedures into steps is commonly used to make learning more efficient. Existing methods of segmenting procedures into steps, however, are procedure-specific and not standardized, hampering their application across specialties and thus their worldwide uptake. The aim of this study was to establish consensus on a step-by-step framework for standardizing the segmentation of surgical procedures into steps.


Advances in Health Sciences Education | 2016

An experimental study on the effects of a simulation game on students’ clinical cognitive skills and motivation

Mary E. W. Dankbaar; Jelmer Alsma; Els E. H. Jansen; Jeroen J. G. van Merrienboer; Jan L. C. M. van Saase; Stephanie C. E. Schuit


Perspectives on medical education | 2014

A blended design in acute care training: similar learning results, less training costs compared with a traditional format

Mary E. W. Dankbaar; Diana J. Storm; Irene C. Teeuwen; Stephanie C. E. Schuit


Perspectives on medical education | 2014

Technology for learning: how it has changed education

Mary E. W. Dankbaar; Peter G. M. de Jong


Perspectives on medical education | 2017

Serious games and blended learning : Effects on performance and motivation in medical education

Mary E. W. Dankbaar

Collaboration


Dive into Mary E. W. Dankbaar's collaboration.

Top Co-Authors

Diana J. Storm

Erasmus University Rotterdam

Els E. H. Jansen

Erasmus University Rotterdam

Gerrie Prins

Erasmus University Medical Center

Irene C. Teeuwen

Erasmus University Rotterdam

J. F. Lange

Erasmus University Rotterdam
