Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Freeman is active.

Publication


Featured research published by Adrian Freeman.


Medical Teacher | 2010

Progress testing internationally

Adrian Freeman; Cees van der Vleuten; Zineb Miriam Nouns; Chris Ricketts

… curriculum) are given the same written test. The test is comprehensive, sampling all relevant disciplines in a curriculum, usually determined by a fixed blueprint. Because of the need for wide sampling, items are typically of the multiple-choice type. The test is repeated regularly, using the same blueprint but usually with different items on each occasion. So a comprehensive (across many content areas), cross-sectional (comparing performance of groups of different ability) and longitudinal (comparing performance over time) picture is obtained of the knowledge of learners in relation to the end objectives of a curriculum.

This is the first time that so many contributions on progress testing are being published together. Although progress testing was initially developed in the 1970s by two institutions independently, it has taken quite a long time for other schools to adopt this distinctive testing procedure. The reason is probably twofold. First and foremost, the utility of the testing procedure is not easily understood. The concept is so different from usual course-related testing that it takes time to really understand the ideas behind it and to see the potential benefits that may result from progress testing. The second reason is that progress testing can be logistically burdensome: it requires considerable effort for test development, administration and scoring. The resources and centralized governance required are probably major obstacles for many institutions.

Nevertheless, in recent years an increasing number of medical schools and other institutions have gained interest in progress testing and are using the method. In order to exchange information and experiences, a symposium on progress testing was held at the 2009 conference of the Association for Medical Education in Europe (AMEE). Medical Teacher offered space for the subsequent publication of these papers, which you will find in this issue. By way of …


Medical Teacher | 2010

Beyond assessment: Feedback for individuals and institutions based on the progress test

Lee Coombes; Chris Ricketts; Adrian Freeman; Jeff Stratford

Background: Progress testing is used at Peninsula Medical School to test applied medical knowledge four times a year using a 125-item multiple choice test. Items within each test are classified and matched to the curriculum blueprint. Aim: To examine the use of item classifications as part of a quality assurance process and to examine the range of available feedback provided after each test or group of tests. Methods: The questions were classified using a single best classification method. These were placed into a simplified version of the progress test assessment blueprint. Average item facilities for individuals and cohorts were used to provide feedback to individual students and curriculum designers. Results: The analysis shows that feedback can be provided at a number of levels, and inferences about various groups can be made. It demonstrates that learning mostly occurs in the early years of the course, but when examined longitudinally, it shows how different patterns of learning exist in different curriculum areas. It also shows that the effect of changes in the curriculum may be monitored through these data. Conclusions: Used appropriately, progress testing can provide a wide range of feedback to every individual or group of individuals in a medical school.
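The feedback computation this abstract describes reduces to simple arithmetic: an item's facility is the proportion of candidates answering it correctly, and averaging facilities within each blueprint category, per cohort, yields a feedback matrix for students and curriculum designers. Below is a minimal sketch of that aggregation; the cohort and blueprint labels and the simulated responses are hypothetical, and the real 125-item blueprint is not reproduced here.

```python
import numpy as np

# responses[s, i] = 1 if student s answered item i correctly, else 0.
# Hypothetical example: 3 cohorts of 4 students and 6 items classified
# into two blueprint areas; labels and data are illustrative only.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(12, 6))
cohort = np.repeat(["Year 1", "Year 2", "Year 3"], 4)
blueprint = np.array(["Cardio", "Cardio", "Cardio",
                      "Neuro", "Neuro", "Neuro"])

for year in ("Year 1", "Year 2", "Year 3"):
    facility = responses[cohort == year].mean(axis=0)  # per-item facility
    for area in ("Cardio", "Neuro"):
        # Mean facility of the area's items = one feedback cell
        # for this cohort.
        print(f"{year} {area}: {facility[blueprint == area].mean():.2f}")
```

Tracking the same cohort's per-area means across successive tests would give the longitudinal patterns of learning that the abstract mentions.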


Medical Teacher | 2010

Choosing and designing knowledge assessments: Experience at a new medical school

Adrian Freeman; Chris Ricketts

Background: Curriculum developers have a wide choice of assessment methods in all aspects of medical education including the specific area of medical knowledge. When selecting the appropriate tool, there is an increasing literature to provide a robust evidence base for developments or decisions. Aim: As a new medical school, we wished to select the most appropriate method for knowledge assessment. Methods: This article describes how a new medical school came to choose progress testing as its only method of summative assessment of undergraduate medical knowledge. Results: The rationale, implementation, development and performance of the assessment are described. The position after the first cohort of students qualified is evaluated. Conclusion: Progress testing has worked well in a new school. Opportunities for further study and development exist. It is to be hoped that our experiences and evidence will assist and inform others as they consider developments for their own schools.


Medical Education | 2009

Standard setting for progress tests: combining external and internal standards

Chris Ricketts; Adrian Freeman; Lee Coombes

Objectives: There has been little work on standard setting for progress tests, and it is common practice to use normative standards. This study aimed to develop a new approach to standard setting for progress tests administered at the point when students approach graduation.


Medical Teacher | 2010

Adaptation of medical progress testing to a dental setting

J.H. Bennett; Adrian Freeman; Lee Coombes; Liz Kay; Chris Ricketts

Although progress testing (PT) is well established in several medical schools, it is new to dentistry. Peninsula College of Medicine and Dentistry has recently established a Bachelor of Dental Surgery programme and has been one of the first schools to use PT in a dental setting. Issues associated with its development and its adaptation to the specific needs of the dental curriculum are considered.


Medical Teacher | 2010

Can we share questions? Performance of questions from different question banks in a single medical school

Adrian Freeman; Anthony Nicholls; Chris Ricketts; Lee Coombes

Background: To use progress testing, a large bank of questions is required, particularly when planning to deliver tests over a long period of time. The questions need to be not only of good quality but also balanced in subject coverage across the curriculum to allow appropriate sampling. Hence, as well as creating its own questions, an institution could share questions. Both methods allow ownership and structuring of the test appropriate to the educational requirements of the institution. Method: Peninsula Medical School (PMS) has developed a mechanism to validate questions written in house. That mechanism can be adapted to utilise questions from an international question bank, the International Digital Electronic Access Library (IDEAL), and a UK-based question bank, the Universities Medical Assessment Partnership (UMAP). These questions have been used in our progress tests and analysed for relative performance. Results: Data are presented to show that questions from differing sources can have comparable performance in a progress testing format. Conclusion: There are difficulties in transferring questions from one institution to another, including curricular and cultural differences. Nevertheless, our experience suggests that only a relatively small amount of work is required to adapt questions from external question banks for effective use. The longitudinal aspect of progress testing (albeit summative) may allow more flexibility in question usage than single high-stakes exams.
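The abstract does not detail the psychometric comparison, but "analysed for relative performance" plausibly means comparing item statistics such as facility across the source banks. A hedged sketch of one such comparison follows; the facilities are made up, and only the bank labels come from the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical item facilities (proportion of candidates correct) for
# items from the three sources named in the abstract, observed in the
# same progress test.
facility = {
    "in-house": np.array([0.62, 0.55, 0.71, 0.48, 0.66]),
    "IDEAL":    np.array([0.58, 0.64, 0.52, 0.69, 0.61]),
    "UMAP":     np.array([0.60, 0.57, 0.67, 0.54, 0.63]),
}

for bank, vals in facility.items():
    print(f"{bank}: mean {vals.mean():.2f}, sd {vals.std(ddof=1):.2f}")

# One-way ANOVA: does mean facility differ by source? Comparable means
# would support the conclusion that questions can be shared.
f_stat, p_val = stats.f_oneway(*facility.values())
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")
```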


Medical Teacher | 2010

Difficult decisions for progress testing: how much and how often?

Chris Ricketts; Adrian Freeman; Giovanni Pagliuca; Lee Coombes; Julian Archer

This article is primarily an opinion piece which aims to encourage debate and future research. There is little theoretical or practical research on how best to design progress tests. We propose that progress test designers should be clear about the primary purpose of their assessment. We provide some empirical evidence about reliability and cost based upon generalisability theory. We suggest that future research is needed in the areas of educational impact and acceptability.
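The generalisability-theory evidence the authors refer to typically comes from a decision study: estimate person and residual variance components from a persons-by-items score matrix, then project the generalisability coefficient for alternative test lengths to weigh reliability against cost. Here is a minimal one-facet sketch under that standard model, with simulated data rather than the paper's:

```python
import numpy as np

# Simulated persons-by-items matrix of 0/1 scores (not the paper's data).
rng = np.random.default_rng(1)
n_p, n_i = 50, 125
ability = rng.normal(0.0, 0.8, size=(n_p, 1))
scores = (rng.normal(ability, 1.0, size=(n_p, n_i)) > 0).astype(float)

# One-facet crossed design (persons x items): variance components
# estimated from mean squares.
grand = scores.mean()
ms_p = n_i * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
resid = (scores
         - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True)
         + grand)
ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
var_p = max((ms_p - ms_res) / n_i, 0.0)  # person (universe-score) variance
var_res = ms_res                         # residual variance

# Decision study: generalisability coefficient for alternative test
# lengths -- the reliability-versus-cost trade-off under discussion.
for n_items in (60, 125, 250):
    g = var_p / (var_p + var_res / n_items)
    print(f"{n_items} items: G = {g:.2f}")
```

Longer tests raise G but also cost; the article argues designers should fix the purpose of the assessment before choosing where to sit on that curve.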


British Journal of General Practice | 2015

Licensing exams and judicial review: the closing of one door and opening of others?

Sue Rendel; Pauline Foreman; Adrian Freeman

The Royal College of General Practitioners (RCGP) has the responsibility to provide a curriculum and suitable assessments to license doctors to work as GP specialists in the UK. The General Medical Council (GMC), as the Regulator, holds the RCGP to account for the delivery of these functions. As with all health care, the workload of a GP has become more complex: GPs are responsible for providing primary care to an ageing population with multimorbidity, and increasingly more of that care is delivered within the community rather than in hospitals. Licensed GPs need to have the knowledge and skills to feel capable of this work, and patients have a right to safe and effective care. The MRCGP examination seeks to establish the readiness of candidates to look after patients in unsupervised practice. A recent study has demonstrated the relationship between scores on licensing examinations and patient health outcomes.1 The GP specialty training programme is only 3 years in duration. The MRCGP examination, which must be passed to obtain a certificate of completion of training (CCT), has three components: the applied knowledge test (AKT), attempted from Year 2; the clinical skills assessment (CSA), attempted in Year 3; and workplace-based assessment, which runs throughout the entire 3-year programme. The CSA is an assessment of a doctor's ability to integrate and apply clinical, professional, communication and practical skills appropriate for general practice. It is an objective structured clinical examination (OSCE)-style examination of 13 stations. Using professional role players, the exam assesses candidates' clinical skills in standardised …


Medical Teacher | 2006

Innovative learning: employing medical students to write formative assessments.

Suzanne Chamberlain; Adrian Freeman; James Oldham; D. L. Sanders; Nicky Hudson; Chris Ricketts

Peninsula Medical School, UK, employed six students to write MCQ items for a formative applied medical knowledge item bank. The students successfully generated 260 good-quality MCQs during their six-week contract period. Informal feedback from the students and two staff mentors suggests that the exercise provided a very effective learning environment and that students felt they were 'being paid to learn'. Further research is under way to track the progress of the students involved in the exercise, and to formally evaluate the impact on learning.


BMC Medical Education | 2018

Cut-scores revisited: feasibility of a new method for group standard setting

Boaz Shulruf; Lee Coombes; Arvin Damodaran; Adrian Freeman; P. D. Jones; Steve Lieberman; Phillippa Poole; Joel Rhee; Tim Wilkinson; Peter Harris

Background: Standard setting is one of the most contentious topics in educational measurement. Commonly used methods all have well-reported limitations. To date, there is no conclusive evidence suggesting which standard-setting method yields the highest validity. Methods: The method described and piloted in this study asked expert judges to estimate the scores on a real MCQ examination that they considered indicated a clear pass, a clear fail, and the pass mark for the examination as a whole. The mean and SD of the judges' responses to these estimates, Z scores and confidence intervals were used to derive the cut-score and the confidence in it. Results: In this example the new method's cut-score was higher than the judges' estimate. The method also yielded estimates of statistical error, which determine the range of the acceptable cut-score and the estimated level of confidence one may have in the accuracy of that cut-score. Conclusions: This new standard-setting method offers some advances, and possibly advantages, in that the decisions being asked of judges are based on firmer constructs, and it takes into account variation among judges.
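The abstract gives the ingredients of the computation: judges' estimates are summarised by their mean and SD, and Z scores and confidence intervals convert that spread into a cut-score with a stated confidence. How the clear-pass and clear-fail estimates enter the final calculation is not spelled out above, so the sketch below only illustrates the simplest reading, taking the mean pass-mark estimate with a normal-theory 95% confidence interval; the judge data are hypothetical.

```python
import numpy as np

# Hypothetical pass-mark estimates (% scores) from eight expert judges
# for the examination as a whole.
pass_marks = np.array([55.0, 58.0, 52.0, 60.0, 54.0, 57.0, 53.0, 56.0])

mean = pass_marks.mean()                     # candidate cut-score
sd = pass_marks.std(ddof=1)                  # spread of judge opinion
sem = sd / np.sqrt(len(pass_marks))          # standard error of the mean

# 95% CI via the normal z value: the acceptable range of the cut-score
# and a statement of confidence in its accuracy.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"cut-score {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```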

Collaboration


Dive into Adrian Freeman's collaboration.

Top Co-Authors

Chris Ricketts, Peninsula College of Medicine and Dentistry
Anthony Nicholls, Peninsula College of Medicine and Dentistry
Giovanni Pagliuca, Peninsula College of Medicine and Dentistry
J.H. Bennett, University College London
Jeff Stratford, Peninsula College of Medicine and Dentistry