Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jan P.J. van Schaik is active.

Publication


Featured research published by Jan P.J. van Schaik.


CardioVascular and Interventional Radiology | 1996

Intraarterial pressure gradients after randomized angioplasty or stenting of iliac artery lesions

Eric Tetteroo; Cees Haaring; Yolanda van der Graaf; Jan P.J. van Schaik; A. D. van Engelen; Willem P. Th. M. Mali

Purpose: To determine initial technical results of percutaneous transluminal angioplasty (PTA) and stent procedures in the iliac artery, mean intraarterial pressure gradients were recorded before and after each procedure. Methods: We randomly assigned 213 patients with typical intermittent claudication to primary stent placement (n=107) or primary PTA (n=106), with subsequent stenting in the case of a residual mean pressure gradient of >10 mmHg (n=45). Eligibility criteria included angiographic iliac artery stenosis (>50% diameter reduction) and/or a peak systolic velocity ratio >2.5 on duplex examination. Mean intraarterial pressures were recorded simultaneously above and below the lesion, at rest and also during vasodilatation in the case of a resting gradient ≤10 mmHg. Results: Pressure gradients in the primary stent group were 14.9±10.4 mmHg before and 2.9±3.5 mmHg after stenting. Pressure gradients in the primary PTA group were 17.3±11.3 mmHg pre-PTA, 4.2±5.4 mmHg post-PTA, and 2.5±2.8 mmHg after selective stenting. Compared with primary stent placement, PTA plus selective stent placement avoided application of a stent in 63% (86/137) of cases, resulting in a considerable cost saving. Conclusion: Technical results of primary stenting and PTA plus selective stenting are similar in terms of residual pressure gradients.
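The selective-stenting rule and the reported stent avoidance reduce to simple arithmetic; below is a minimal Python sketch, with the 10 mmHg threshold and the 86/137 count taken from the abstract. Function and variable names are illustrative, not the study's.

```python
# Minimal sketch of the trial's selective-stenting rule: after PTA, a stent
# is placed only if the residual mean pressure gradient exceeds 10 mmHg.
# Threshold and counts come from the abstract; names are ours.

STENT_THRESHOLD_MMHG = 10.0

def needs_selective_stent(residual_gradient_mmhg: float) -> bool:
    """True if the post-PTA residual gradient mandates stent placement."""
    return residual_gradient_mmhg > STENT_THRESHOLD_MMHG

avoided, total = 86, 137  # stent avoided in 86 of 137 PTA-first cases
print(f"stent avoided in {avoided / total:.0%} of cases")  # -> 63%
```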


Pediatric Radiology | 1997

Transvenous embolisation of an arteriovenous malformation of the mandible via a femoral approach

Frederik J. A. Beek; Frans W. ten Broek; Jan P.J. van Schaik; Willem P. Th. M. Mali

Arteriovenous malformations (AVM) of the mandible are uncommon but can give rise to sudden massive haemorrhage. Transarterial or direct transosseous embolisation can be used to treat this condition but is not always effective. We describe a case of mandibular AVM with a single draining vein which was embolised successfully via a femoral transvenous approach.


Medical Teacher | 2012

Construct validation of progress testing to measure knowledge and visual skills in radiology

Cécile J. Ravesloot; Marieke van der Schaaf; Cees Haaring; Cas Kruitwagen; Erik Beek; Olle ten Cate; Jan P.J. van Schaik

Background: The Dutch Radiology Progress Test (DRPT) monitors the acquisition of knowledge and visual skills of radiology residents in the Netherlands. Aim: We aimed to evaluate the quality of progress testing in postgraduate radiology training by studying the reliability of the DRPT and finding an indication for its construct validity. We expected that knowledge would increase rapidly in the first years of residency, leveling off in later years to allow for the development of visual skills. We hypothesized that scores on the DRPT reflect this pattern. Methods: Internal consistencies were estimated with Cronbach's alpha. Performance increases over program years were tested using one-way analysis of variance. Results: Data were available for 498 residents (2281 test results). Reliabilities were around Cronbach's alpha 0.90. There was a significant difference in the mean test results between the first three years of residency. After the fourth year, no significant increase in test scores on knowledge could be measured on eight tests. The same pattern occurred for scores on visual skills; however, visual skills scores tended to increase more sharply than knowledge scores. Conclusion: We found support for the reliability and construct validity of the DRPT. However, assessment of visual skill development needs further exploration.
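The two analyses the abstract names, internal consistency via Cronbach's alpha and score differences across residency years via one-way ANOVA, are standard; a short illustrative sketch follows. This is not the study's code, and all data are simulated placeholders.

```python
# Illustrative sketch of Cronbach's alpha and one-way ANOVA on simulated data.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array with rows = examinees and columns = test items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, (200, 1))
items = ability + rng.normal(0, 1, (200, 40))   # 200 examinees, 40 items
print(f"alpha = {cronbach_alpha(items):.2f}")

# One-way ANOVA across three residency years (placeholder score samples).
year1, year2, year3 = (rng.normal(m, 10, 80) for m in (50, 58, 63))
f_stat, p_value = stats.f_oneway(year1, year2, year3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```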


Academic Radiology | 2015

Volumetric and two-dimensional image interpretation show different cognitive processes in learners

Anouk van der Gijp; Cécile J. Ravesloot; Marieke van der Schaaf; Irene C. van der Schaaf; Josephine C.B.M. Huige; Koen L. Vincken; Olle ten Cate; Jan P.J. van Schaik

RATIONALE AND OBJECTIVES In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a volumetric data set in stack mode demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare knowledge and skills used for interpretation of volumetric versus 2D images. MATERIALS AND METHODS Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings. RESULTS Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (median 48% vs 35%; z = -3.9; P < .001). Synthesis was less prominent in volumetric than in 2D image interpretation (median 28% vs 42%; z = -3.9; P < .001). No differences were found in analysis utterances. CONCLUSIONS Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills.
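The within-subject comparison reports z-statistics on paired percentages, which suggests a signed-rank-type test; the sketch below uses scipy's Wilcoxon signed-rank test on invented per-participant data. The assumption that this was the exact test is ours.

```python
# Hedged sketch of a paired, within-subject comparison: percentage of
# perception utterances per participant in volumetric vs 2D reading.
# All data are fabricated; medians in the abstract were 48% vs 35%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 20
perception_volumetric = rng.normal(48, 8, n_participants)  # % of utterances
perception_2d = rng.normal(35, 8, n_participants)

statistic, p_value = stats.wilcoxon(perception_volumetric, perception_2d)
print(f"W = {statistic:.1f}, p = {p_value:.4f}")
```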


Academic Radiology | 2015

Support for external validity of radiological anatomy tests using volumetric images

Cécile J. Ravesloot; Anouk van der Gijp; Marieke van der Schaaf; Josephine C.B.M. Huige; Koen L. Vincken; Christian P. Mol; Ronald L. A. W. Bleys; Olle Tj ten Cate; Jan P.J. van Schaik

RATIONALE AND OBJECTIVES Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital, VI test and hypothesized that scores on this test would better reflect radiological anatomy skills than scores on a traditional 2D image test. To evaluate external validity, we correlated VI and 2D image test scores with anatomy cadaver-based test scores. MATERIALS AND METHODS In 2012, 246 medical students completed one of two comparable versions (A and B) of a digital radiology test, each containing 20 2D image and 20 VI questions. Thirty-three of these participants also took a human cadaver anatomy test. Mean scores and reliabilities of the 2D image and VI subtests were compared and correlated with human cadaver anatomy test scores. Participants received a questionnaire about perceived representativeness and difficulty of the radiology test. RESULTS Human cadaver test scores were not correlated with 2D image scores, but were significantly correlated with VI scores (r = 0.44, P < .05). Cronbach's α reliability was 0.49 (A) and 0.65 (B) for the 2D image subtests and 0.65 (A) and 0.71 (B) for the VI subtests. Mean VI scores (74.4%, standard deviation 2.9) were significantly lower than 2D image scores (83.8%, standard deviation 2.4) in version A (P < .001). VI questions were considered more representative of clinical practice and education than 2D image questions and less difficult (both P < .001). CONCLUSIONS VI tests show higher reliability, a significant correlation with human cadaver test scores, and are considered more representative for clinical practice than tests with 2D images.
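As a rough illustration of the external-validity check, the sketch below correlates VI subtest scores with cadaver test scores via Pearson's r (the abstract reports r = 0.44 for n = 33). The choice of Pearson's r and all data are our assumptions.

```python
# Minimal sketch: correlating volumetric-image (VI) subtest scores with
# human cadaver anatomy test scores. Data are fabricated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
cadaver_scores = rng.normal(70, 10, 33)                  # n = 33 participants
vi_scores = 0.4 * cadaver_scores + rng.normal(45, 8, 33)

r, p_value = stats.pearsonr(cadaver_scores, vi_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```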


Radiology | 2017

Predictors of Knowledge and Image Interpretation Skill Development in Radiology Residents

Cécile J. Ravesloot; Marieke van der Schaaf; Cas Kruitwagen; Anouk van der Gijp; D. R. Rutgers; Cees Haaring; Olle ten Cate; Jan P.J. van Schaik

Purpose To investigate knowledge and image interpretation skill development in residency by studying scores on knowledge and image questions on radiology tests, mediated by the training environment. Materials and Methods Ethical approval for the study was obtained from the ethical review board of the Netherlands Association for Medical Education. Longitudinal test data from 577 of 2884 radiology residents who took semiannual progress tests over a 5-year period were retrospectively analyzed using a nonlinear mixed-effects model with training length as the input variable. Tests included nonimage and image questions that assessed knowledge and image interpretation skill. Hypothesized predictors were hospital type (academic or nonacademic), training hospital, enrollment age, sex, and test date. Results Scores showed curvilinear growth during residency. Image scores increased faster during the first 3 years of residency and reached a higher maximum than knowledge scores (55.8% vs 45.1%). For 1st-year residents, the slope of image score development was 16.8%, versus 12.4% for knowledge scores. Training hospital environment appeared to be an important predictor of both knowledge and image interpretation skill development (maximum score difference between training hospitals was 23.2%; P < .001). Conclusion Expertise developed rapidly in the initial years of radiology residency and leveled off in the 3rd and 4th training years. The shape of the curve was mainly influenced by the specific training hospital.
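The curvilinear growth described here can be illustrated with a simple exponential learning curve fitted at the population level. The study itself used a nonlinear mixed-effects model, so this sketch omits the per-resident and per-hospital random effects; the functional form and all numbers are our assumptions.

```python
# Sketch of one plausible curvilinear growth model, fit with scipy:
# score(t) = score_max * (1 - exp(-rate * t)), t = training length in years.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, score_max, rate):
    """Score as a saturating function of training time t (years)."""
    return score_max * (1 - np.exp(-rate * t))

rng = np.random.default_rng(3)
t = rng.uniform(0, 5, 300)                       # simulated training lengths
scores = learning_curve(t, 55.8, 0.9) + rng.normal(0, 4, 300)

(score_max, rate), _ = curve_fit(learning_curve, t, scores, p0=(50, 1))
print(f"estimated maximum = {score_max:.1f}%, rate = {rate:.2f}/year")
```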


Simulation in Healthcare | 2017

Increasing Authenticity of Simulation-Based Assessment in Diagnostic Radiology

Anouk van der Gijp; Cécile J. Ravesloot; Corinne A. Tipker; Kim de Crom; Dik R. Rutgers; Marieke van der Schaaf; Irene C. van der Schaaf; Christian P. Mol; Koen L. Vincken; Olle ten Cate; Mario Maas; Jan P.J. van Schaik

Introduction: Clinical reasoning in diagnostic imaging professions is a complex skill that requires processing of visual information and image manipulation skills. We developed a digital simulation-based test method to increase the authenticity of image interpretation skill assessment. Methods: A digital application, allowing volumetric image viewing and manipulation, was used for three test administrations of the national Dutch Radiology Progress Test for residents. This study describes the development and implementation process in three phases. To assess the authenticity of the digital tests, perceived image quality and correspondence to clinical practice were evaluated and compared with previous paper-based tests (PTs). Quantitative and qualitative evaluation results were used to improve subsequent tests. Results: Authenticity of the first digital test was not rated higher than that of the PTs. Test characteristics and environmental conditions, such as image manipulation options and ambient lighting, were optimized based on participants' comments. After adjustments in the third digital test, participants favored the image quality and clinical correspondence of the digital image questions over paper-based image questions. Conclusions: Digital simulations can increase the authenticity of diagnostic radiology assessments compared with paper-based testing. However, authenticity does not necessarily increase with higher fidelity. It can be challenging to simulate the image interpretation task of clinical practice in a large-scale assessment setting because of technological limitations. Optimizing image manipulation options, the level of ambient light, time limits, and question types can help improve the authenticity of simulation-based radiology assessments.


Diagnosis | 2017

Identifying error types in visual diagnostic skill assessment

Cécile J. Ravesloot; Anouk van der Gijp; Marieke van der Schaaf; Josephine C.B.M. Huige; Olle ten Cate; Koen L. Vincken; Christian P. Mol; Jan P.J. van Schaik

Background: Misinterpretation of medical images is an important source of diagnostic error. Errors can occur in different phases of the diagnostic process. Insight into the error types made by learners is crucial for training and for giving effective feedback. Most diagnostic skill tests, however, penalize diagnostic mistakes without regard for the diagnostic process or the type of error. A radiology test with stepwise reasoning questions was used to distinguish error types in the visual diagnostic process. We evaluated the additional value of a stepwise question format, compared with diagnosis-only questions, in radiology tests. Methods: Medical students in a radiology elective (n=109) took a radiology test including 11–13 cases in stepwise question format: marking an abnormality, describing the abnormality, and giving a diagnosis. Errors were coded by two independent researchers as perception, analysis, diagnosis, or undefined. Erroneous cases were further evaluated for the presence of latent errors or partial knowledge. Inter-rater reliabilities and percentages of cases with latent errors and partial knowledge were calculated. Results: The stepwise question-format procedure applied to 1351 cases completed by 109 medical students revealed 828 errors. Mean inter-rater reliability of error type coding was Cohen's κ=0.79. Six hundred and fifty errors (79%) could be coded as perception, analysis, or diagnosis errors. The stepwise question format revealed latent errors in 9% and partial knowledge in 18% of cases. Conclusions: A stepwise question format can reliably distinguish error types in the visual diagnostic process, and it reveals latent errors and partial knowledge.
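The inter-rater reliability quoted above (Cohen's κ = 0.79) can be computed directly from two raters' error-type codes; a short sketch with invented labels follows, using scikit-learn's implementation.

```python
# Illustrative Cohen's kappa over two raters' error-type codes.
# Labels and data are invented; the abstract reports a mean kappa of 0.79.
from sklearn.metrics import cohen_kappa_score

rater1 = ["perception", "analysis", "diagnosis", "perception", "undefined",
          "analysis", "perception", "diagnosis"]
rater2 = ["perception", "analysis", "diagnosis", "analysis", "undefined",
          "analysis", "perception", "perception"]

kappa = cohen_kappa_score(rater1, rater2)
print(f"Cohen's kappa = {kappa:.2f}")
```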


Communications in Computer and Information Science | 2014

Practical Implementation of Innovative Image Testing

Corinne Tipker-Vos; Kim de Crom; Anouk van der Gijp; Cécile J. Ravesloot; M.F. van der Schaaf; Christian P. Mol; Mario Maas; Jan P.J. van Schaik; Koen L. Vincken

The testing of image interpretation skills within radiology (often paper-and-pencil) lags behind practice. To increase the authenticity of assessment of image interpretation skills, the Dutch national progress test for medical specialists in training to become radiologists was digitized using the program VQuest. This program makes it possible to administer a test with 2D and 3D images, in which images can be viewed and manipulated as they can in practice. During implementation, the entire assessment cycle, from test design to assessment analysis and evaluation, was run through twice. Apart from some minor suggested improvements, both the trainee specialists and the organizing staff were satisfied with the digitized assessment. Among other things, the trainee specialists feel that this application of digital testing is more consistent with the situation in practice than the conventional testing method.


European Journal of Radiology | 2015

Volumetric CT-images improve testing of radiological image interpretation skills

Cécile J. Ravesloot; Marieke van der Schaaf; Jan P.J. van Schaik; Olle ten Cate; Anouk van der Gijp; Christian P. Mol; Koen L. Vincken

Collaboration


Dive into Jan P.J. van Schaik's collaborations.

Top Co-Authors

Kim de Crom

Academic Medical Center
