Anouk van der Gijp
Utrecht University
Publications
Featured research published by Anouk van der Gijp.
European Journal of Radiology | 2015
Cécile J. Ravesloot; Marieke van der Schaaf; Jan P.J. van Schaik; Olle ten Cate; Anouk van der Gijp; Christian P. Mol; Koen L. Vincken
RATIONALE AND OBJECTIVES: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT images enhance test quality, in comparison with traditional, completely 2D image-based tests, because they might better reflect the skills required for clinical practice.

MATERIALS AND METHODS: Two groups of medical students (n=139; n=143), trained with 2D and volumetric CT images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty, and user-friendliness of the test questions and testing program. Students' test scores and reliabilities of the 2D and volumetric CT-image tests, measured with Cronbach's alpha, were compared.

RESULTS: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51; version B: .54) than for 2D CT-image scores (version A: .24; version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p<.001). The volumetric CT-image testing program was considered user-friendly.

CONCLUSION: This study shows that volumetric image questions can be successfully integrated in students' radiology testing. Results suggest that the inclusion of volumetric CT images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing test reliability.
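The reliability figures above are Cronbach's alphas. As an illustrative sketch only, with entirely hypothetical toy data rather than the study's scores, the coefficient can be computed from a students-by-items score matrix like this:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbachs_alpha(item_scores):
    """Cronbach's alpha for rows of per-student item scores."""
    k = len(item_scores[0])                                   # number of test items
    item_vars = [variance(col) for col in zip(*item_scores)]  # variance of each item
    total_var = variance([sum(row) for row in item_scores])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 4 students x 3 dichotomously scored items
scores = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
print(round(cronbachs_alpha(scores), 2))  # → 0.75
```

Alpha rises when items covary (students who score well on one item tend to score well on the others), which is why it is read as internal-consistency reliability of a subtest.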
Academic Radiology | 2015
Anouk van der Gijp; Cécile J. Ravesloot; Marieke van der Schaaf; Irene C. van der Schaaf; Josephine C.B.M. Huige; Koen L. Vincken; Olle ten Cate; Jan P.J. van Schaik
RATIONALE AND OBJECTIVES: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare knowledge and skills used for interpretation of volumetric versus 2D images.

MATERIALS AND METHODS: Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings.

RESULTS: Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (median 48% vs 35%; z = -3.9; P < .001). Synthesis was less prominent in volumetric than in 2D image interpretation (median 28% vs 42%; z = -3.9; P < .001). No differences were found in analysis utterances.

CONCLUSIONS: Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills.
Academic Radiology | 2015
Cécile J. Ravesloot; Anouk van der Gijp; Marieke van der Schaaf; Josephine C.B.M. Huige; Koen L. Vincken; Christian P. Mol; Ronald L. A. W. Bleys; Olle Tj ten Cate; Jan P.J. van Schaik
RATIONALE AND OBJECTIVES: Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital VI test and hypothesized that scores on this test would better reflect radiological anatomy skills than scores on a traditional 2D image test. To evaluate external validity, we correlated VI and 2D image test scores with anatomy cadaver-based test scores.

MATERIALS AND METHODS: In 2012, 246 medical students completed one of two comparable versions (A and B) of a digital radiology test, each containing 20 2D image and 20 VI questions. Thirty-three of these participants also took a human cadaver anatomy test. Mean scores and reliabilities of the 2D image and VI subtests were compared and correlated with human cadaver anatomy test scores. Participants received a questionnaire about the perceived representativeness and difficulty of the radiology test.

RESULTS: Human cadaver test scores were not correlated with 2D image scores, but were significantly correlated with VI scores (r = 0.44, P < .05). Cronbach's α reliability was 0.49 (A) and 0.65 (B) for the 2D image subtests and 0.65 (A) and 0.71 (B) for the VI subtests. Mean VI scores (74.4%, standard deviation 2.9) were significantly lower than 2D image scores (83.8%, standard deviation 2.4) in version A (P < .001). VI questions were considered more representative of clinical practice and education than 2D image questions, and less difficult (both P < .001).

CONCLUSIONS: VI tests show higher reliability and a significant correlation with human cadaver test scores, and are considered more representative of clinical practice than tests with 2D images.
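The external-validity check above rests on a Pearson correlation between paired test scores. A minimal sketch of how such an r is computed, using entirely hypothetical score pairs rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation sum
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
    return cov / (sx * sy)

# Hypothetical paired scores: volumetric-image test vs cadaver anatomy test
vi_scores = [60, 70, 75, 80, 90]
cadaver_scores = [55, 65, 80, 78, 88]
print(round(pearson_r(vi_scores, cadaver_scores), 2))
```

An r of 0.44 with n = 33, as reported above, is a moderate positive association; in practice the accompanying P value would come from a statistics library rather than this hand-rolled helper.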
Journal of Digital Imaging | 2016
Annemarie M. den Harder; Marissa Frijlingh; Cécile J. Ravesloot; Anne E. Oosterbaan; Anouk van der Gijp
With the development of cross-sectional imaging techniques and the transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction possibilities in radiology e-learning programs and the evidence for effects of radiology e-learning on learning outcomes and perspectives of medical students and teachers. A systematic search in PubMed, EMBASE, Cochrane, ERIC, and PsycInfo was performed. Articles were screened by two authors and included when they concerned the evaluation of radiological e-learning tools for undergraduate medical students. Nineteen articles were included. Seven studies evaluated e-learning programs with image interaction possibilities. Students perceived e-learning with image interaction possibilities to be a useful addition to learning with hard-copy images and to be effective for learning 3D anatomy. Both e-learning programs with and without image interaction possibilities were found to improve radiological knowledge and skills. In general, students found e-learning programs easy to use, rated image quality high, and found the difficulty level of the courses appropriate. Furthermore, they felt that their knowledge and understanding of radiology improved by using e-learning. In conclusion, the addition of radiology e-learning in undergraduate medical education can improve radiological knowledge and image interpretation skills. Differences between the effects of e-learning with and without image interaction possibilities on learning outcomes are unknown and should be subject to future research.
Computers in Human Behavior | 2016
Bobby G. Stuijfzand; Marieke van der Schaaf; Femke Kirschner; Cécile J. Ravesloot; Anouk van der Gijp; Koen L. Vincken
Medical image interpretation is moving from using 2D to volumetric images, thereby changing the cognitive and perceptual processes involved. This is expected to affect medical students' experienced cognitive load while learning image interpretation skills. With two studies, this explorative research investigated whether measures inherent to image interpretation, i.e., human-computer interaction and eye tracking, relate to cognitive load. Subsequently, it investigated effects of volumetric image interpretation on second-year medical students' cognitive load. Study 1 measured human-computer interactions of participants during two volumetric image interpretation tasks. Using structural equation modelling, the latent variable volumetric image information was identified from the data, which significantly predicted self-reported mental effort as a measure of cognitive load. Study 2 measured participants' eye movements during multiple 2D and volumetric image interpretation tasks. Multilevel analysis showed that time to locate a relevant structure in an image was significantly related to pupil dilation, as a proxy for cognitive load. It is discussed how combining human-computer interaction and eye tracking allows for comprehensive measurement of cognitive load. Combining such measures in a single model would allow for disentangling unique sources of cognitive load, leading to recommendations for implementation of volumetric image interpretation in the medical education curriculum.

Highlights:
- Image interpretation in medicine moved from 2D to volumetric images.
- Cognitive load of students interpreting medical images is affected.
- Human-computer interaction and time to locate a relevant area predict cognitive load.
- Insights useful for avoiding cognitive overload in the medical curriculum.
Radiology | 2017
Cécile J. Ravesloot; Marieke van der Schaaf; Cas Kruitwagen; Anouk van der Gijp; D. R. Rutgers; Cees Haaring; Olle ten Cate; Jan P.J. van Schaik
Purpose: To investigate knowledge and image interpretation skill development in residency by studying scores on knowledge and image questions on radiology tests, mediated by the training environment.

Materials and Methods: Ethical approval for the study was obtained from the ethical review board of the Netherlands Association for Medical Education. Longitudinal test data of 577 of 2884 radiology residents who took semiannual progress tests during 5 years were retrospectively analyzed by using a nonlinear mixed-effects model taking training length as the input variable. Tests included nonimage and image questions that assessed knowledge and image interpretation skill. Hypothesized predictors were hospital type (academic or nonacademic), training hospital, enrollment age, sex, and test date.

Results: Scores showed curvilinear growth during residency. Image scores increased faster during the first 3 years of residency and reached a higher maximum than knowledge scores (55.8% vs 45.1%). The slope of image score development versus knowledge question scores of 1st-year residents was 16.8% versus 12.4%, respectively. The training hospital environment appeared to be an important predictor of both knowledge and image interpretation skill development (the maximum score difference between training hospitals was 23.2%; P < .001).

Conclusion: Expertise developed rapidly in the initial years of radiology residency and leveled off in the 3rd and 4th training years. The shape of the curve was mainly influenced by the specific training hospital.
Academic Radiology | 2017
Anouk van der Gijp; Emily M. Webb; David M. Naeger
Scholars have identified two distinct ways of thinking. This Dual Process Theory distinguishes a fast, nonanalytical way of thinking, called System 1, and a slow, analytical way of thinking, referred to as System 2. In radiology, we use both methods when interpreting and reporting images, and both should ideally be emphasized when educating our trainees. This review provides practical tips for improving radiology education, by enhancing System 1 and System 2 thinking among our trainees.
Academic Radiology | 2017
Anouk van der Gijp; Koen L. Vincken; Christy Boscardin; Emily M. Webb; Olle ten Cate; David M. Naeger
RATIONALE AND OBJECTIVES: Radiology expertise depends on the use of efficient search strategies. The aim of this study was to investigate the effect of teaching search strategies on trainees' accuracy in detecting lung nodules at computed tomography.

MATERIALS AND METHODS: Two search strategies, scanning and drilling, were tested with a randomized crossover design. Nineteen junior radiology residents were randomized into two groups. Both groups first completed a baseline lung nodule detection test allowing a free search strategy, followed by a test after scanning instruction and drilling instruction, or vice versa. True positive (TP) and false positive (FP) scores and scroll behavior were registered. A mixed-design analysis of variance was applied to compare the three search conditions.

RESULTS: Search strategy instruction had a significant effect on scroll behavior, F(1.3) = 54.2, P < 0.001; TP score, F(2) = 16.1, P < 0.001; and FP score, F(1.3) = 15.3, P < 0.001. Scanning instruction resulted in significantly lower TP scores than drilling instruction (M = 10.7, SD = 5.0 versus M = 16.3, SD = 5.3), t(18) = 4.78, P < 0.001, or free search (M = 15.3, SD = 4.6), t(18) = 4.44, P < 0.001. TP scores for drilling did not significantly differ from free search. FP scores for drilling (M = 7.3, SD = 5.6) were significantly lower than for free search (M = 12.5, SD = 7.8), t(18) = 4.86, P < 0.001.

CONCLUSIONS: Teaching a drilling strategy is preferable to teaching a scanning strategy for finding lung nodules.
Advances in Health Sciences Education | 2018
Larissa den Boer; Marieke van der Schaaf; Koen L. Vincken; Chris P. Mol; Bobby G. Stuijfzand; Anouk van der Gijp
The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behaviors and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describe how radiologists interact with and manipulate volumetric images.
Simulation in Healthcare | 2017
Anouk van der Gijp; Cécile J. Ravesloot; Corinne A. Tipker; Kim de Crom; Dik R. Rutgers; Marieke van der Schaaf; Irene C. van der Schaaf; Christian P. Mol; Koen L. Vincken; Olle ten Cate; Mario Maas; Jan P.J. van Schaik
Introduction: Clinical reasoning in diagnostic imaging professions is a complex skill that requires processing of visual information and image manipulation skills. We developed a digital simulation-based test method to increase the authenticity of image interpretation skill assessment.

Methods: A digital application, allowing volumetric image viewing and manipulation, was used for three test administrations of the national Dutch Radiology Progress Test for residents. This study describes the development and implementation process in three phases. To assess the authenticity of the digital tests, perceived image quality and correspondence to clinical practice were evaluated and compared with previous paper-based tests (PTs). Quantitative and qualitative evaluation results were used to improve subsequent tests.

Results: Authenticity of the first digital test was not rated higher than that of the PTs. Test characteristics and environmental conditions, such as image manipulation options and ambient lighting, were optimized based on participants' comments. After adjustments in the third digital test, participants favored the image quality and clinical correspondence of the digital image questions over paper-based image questions.

Conclusions: Digital simulations can increase the authenticity of diagnostic radiology assessments compared with paper-based testing. However, authenticity does not necessarily increase with higher fidelity. It can be challenging to simulate the image interpretation task of clinical practice in a large-scale assessment setting because of technological limitations. Optimizing image manipulation options, the level of ambient light, time limits, and question types can help improve the authenticity of simulation-based radiology assessments.