J.J. Rethans
Maastricht University
Publications
Featured research published by J.J. Rethans.
Medical Education | 2002
J.J. Rethans; John J. Norcini; M. Barón‐Maldonado; David Blackmore; Brian Jolly; Tony LaDuca; S. R. Lew; Gayle G. Page; L. H. Southgate
Objective: This paper aims to describe current views of the relationship between competence and performance and to delineate some of the implications of the distinction between the two areas for the purpose of assessing doctors in practice.
BMJ | 1991
J.J. Rethans; F Sturmans; C van der Vleuten; P Hobus
Objective: To study the differences and the relation between what a doctor actually does in daily practice (performance) and what he or she is capable of doing (competence), using national standards for general practice.
Design: General practitioners were consulted by four standardised (simulated) patients portraying four different cases during normal surgery hours. Later the doctors participated in a controlled practice test, for which they were asked to perform to the best of their ability. In the test they saw exactly the same standardised cases but in different patients. The patients reported on the consultations.
Setting: Province of Limburg, the Netherlands.
Subjects: 442 general practitioners invited by letter; 137 (31%) agreed to participate, of whom 36 were selected and visited.
Main outcome measures: Number of actions taken during the consultations across complaints and for each category of complaint: the competence and performance total scores. Combination of scores with duration of consultations (efficiency-time score). Correlation between scores in the competence and performance parts.
Results: Mean (SD) total score across complaints for competence was 49% higher than in the performance test (81.8 (11) compared with 54.7 (10.1), p < 0.0001). The Pearson correlation across complaints between the competence total score and the performance total score of the participating physicians was -0.04 (not significant). When efficiency and consultation time were taken into account, the correlation was 0.45 (p < 0.01).
Conclusions: Assessment of competence under examination circumstances can have predictive value for performance in actual practice only when factors such as efficiency and consultation time are taken into account. Below-standard performance of physicians does not necessarily reflect a lack of competence. Performance and competence should be considered distinct constructs.
Medical Education | 2002
Richard Hays; Brian Jolly; L.J.M. Caldon; Peter McCrorie; Pauline McAvoy; I. C. McManus; J.J. Rethans
Background: Some doctors who perform poorly appear not to be aware of how their performance compares with accepted practice. The way that professionals maintain their existing expertise and acquire new knowledge and skills – that is, maintain their ‘currency’ of practice – requires a capacity to change. This capacity to change probably requires the individual doctor to possess insight into his or her performance as well as motivation to change. There may be a range of levels of insight in different individuals. At some point this reaches a level which is inadequate for effective self‐regulation. Insight and performance may be critically related, and there are instances where increasing insight in the presence of decreasing performance can also cause difficulties.
Archive | 1997
Albert Scherpbier; C. P. M. van der Vleuten; J.J. Rethans; A. F. W. van der Steeg
Key-Note Addresses. Aims and Objectives. Community Based Programmes. Continuing Medical Education. Curriculum Development. Examining Examinations. Faculty Development. Family Medicine. Information Technology. Innovations in Assessment. OSCE. Postgraduate Training and Assessment. Problem Based Learning. Programme Evaluation. Reasoning and Learning. Selection. Setting Standards. Skills Training. Standardized Patients. Teaching and Learning. Authors Index. Subject Index.
Annals of the Rheumatic Diseases | 2002
Simone L. Gorter; D. van der Heijde; S van der Linden; Harry Houben; J.J. Rethans; Albert Scherpbier; C.P.M. van der Vleuten
Objectives: To assess, using standardised patients (SPs), how rheumatologists diagnose psoriatic arthritis, whether the diagnostic efficiency is influenced by specific characteristics of the rheumatologists, and to study the relationship with costs. Methods: Twenty three rheumatologists were each visited by one of two SPs (one male, one female) presenting as a patient with psoriatic arthritis. SPs remained incognito for all meetings for the duration of the study. Immediately after the encounter, SPs completed case-specific checklists on the medical content of the encounter. Information on ordered laboratory and imaging tests was obtained from each hospital. Results: Fourteen rheumatologists diagnosed psoriatic arthritis correctly. They inspected the skin for psoriatic lesions more often than those rheumatologists who established other diagnoses. Rheumatologists diagnosing psoriatic arthritis spent more on additional laboratory and imaging investigations. These were carried out after the diagnosis to confirm it and to record the extent and severity of the disease. No differences in type of practice, number of outpatients seen each week, working experience, or sex were found between rheumatologists who made the correct diagnosis and those who made other diagnoses. The correct diagnosis was more often missed by rheumatologists who saw the male SP, who presented with clear distal interphalangeal (DIP) joint arthritis only, causing confusion with osteoarthritis of the DIP joints. Conclusion: There is a considerable amount of variation in the delivery of care among rheumatologists who see an SP with psoriatic arthritis. Rheumatologists focusing too much on the most prominent features (DIP joint arthritis) sometimes seem to forget “the hidden (skin) symptoms”.
Medical Education | 1996
J. J. M. Jansen; Albert Scherpbier; J. C. M. Metz; Richard Grol; C.P.M. van der Vleuten; J.J. Rethans
The use of performance‐based assessment has been extended to postgraduate education and practising doctors, despite criticism of its validity. While differences in expertise at this level are easily reflected in scores on a written test, these differences are relatively small on performance‐based tests. However, scores on written tests and performance‐based tests of clinical competence generally show moderate correlations. A study was designed to evaluate the construct validity of a performance‐based test of technical clinical skills in continuing medical education for general practitioners, and to explore the correlation between performance and knowledge of specific skills. A 1‐day skills training course was given to 71 general practitioners, covering four different technical clinical skills. The effect of the training on performance was measured with a performance‐based test using a randomized controlled trial design, while the effect on knowledge was measured with a written test administered 1 month before and directly after the training. A training effect could be shown by the performance‐based test for all four clinical skills. The written test also demonstrated a training effect for all but one skill. However, correlations between scores on the written test and on the performance‐based test were low for all skills. It is concluded that construct validity of a performance‐based test for technical clinical skills of general practitioners was demonstrated, while the knowledge test score was shown to be a poor predictor of competence for specific technical skills.
Medical Education | 2000
J. J. M. Jansen; Richard Grol; C.P.M. van der Vleuten; Albert Scherpbier; Harry F.J.M. Crebolder; J.J. Rethans
Evaluation of the efficacy of a short course in technical clinical skills in changing performance in general practice.
Teaching and Learning in Medicine | 1998
J.A. Jansen; R.P.T.M. Grol; Harry F.J.M. Crebolder; J.J. Rethans; C.P.M. van der Vleuten
Background: Self-directed learning requires accurate self-assessment, but research evidence shows poor validity of self-assessment. Training in self-assessment may improve validity. Purpose: To investigate if repeated personal feedback based on objective knowledge and skill scores enhances the self-assessment skills of practising general practitioners. Method: Participants were general practitioners (n = 60), who received skills training covering 4 clinical skills at 3 months (Group A) or 6 months (Group B) after enrollment in the study. Participants were tested at 3-month intervals with a knowledge test (60 items), a performance-based test (4 stations), and a self-assessment questionnaire (22 items), covering the four different clinical skills. They received personal feedback on the results. Results: At 3 months, mean scores on the self-assessment questionnaire and knowledge test had increased significantly more in Group A compared to Group B, but at 6 months no differences in mean scores remained. Correlations...
Tijdschrift Voor Medisch Onderwijs | 2000
S. L. Gorter; J.J. Rethans; Albert Scherpbier
Patient contacts motivate students and contribute to the formation of better knowledge networks, which are needed when dealing with patients' problems. Patient demonstrations and bedside teaching are long-established teaching formats. Of more recent date are simulated patients, who can be deployed as patient, teacher, examiner and researcher. Real patients, too, can contribute to the curriculum in various roles and teaching formats. This article reviews the history and current state of the role of (simulated) patients in education. Finally, it is argued that giving (simulated) patients a greater role at an earlier stage of training eases the transition from theory to practice and benefits student learning.
Archive | 1997
Paul Ram; J.J. Rethans; R.P.T.M. Grol; C.P.M. van der Vleuten
Objectives: Assessment of practising general practitioners is needed for both formative and summative purposes. The assessment procedure has to include multiple measurements. The Comprehensive Assessment Project (CAP) study elaborates a model of comprehensive assessment, distinguishing whether each method actually covers performance or competence. This paper reports the first psychometric results (validity, reliability and feasibility) of the different assessment methods implemented, and their correlations. Conclusion: Written knowledge tests, video observation of performance during surgeries in daily practice, and practice visits focused on practice management should be taken into account in the comprehensive assessment of practising GPs.