Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard Hays is active.

Publication


Featured research published by Richard Hays.


Medical Teacher | 2011

Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference

John J. Norcini; Brownell Anderson; Valdes Roberto Bollela; Vanessa Burch; Manuel João Costa; Robbert Duvivier; Robert Galbraith; Richard Hays; Athol Kent; Vanessa Perrott; Trudie Roberts

In this article, we outline criteria for good assessment that include: (1) validity or coherence, (2) reproducibility or consistency, (3) equivalence, (4) feasibility, (5) educational effect, (6) catalytic effect, and (7) acceptability. Many of the criteria have been described before and we continue to support their importance here. However, we place particular emphasis on the catalytic effect of the assessment, which is whether the assessment provides results and feedback in a fashion that creates, enhances, and supports education. These criteria do not apply equally well to all situations. Consequently, we discuss how the purpose of the test (summative versus formative) and the perspectives of stakeholders (examinees, patients, teachers-educational institutions, healthcare system, and regulators) influence the importance of the criteria. Finally, we offer a series of practice points as well as next steps that should be taken with the criteria. Specifically, we recommend that the criteria be expanded or modified to take account of: (1) the perspectives of patients and the public, (2) the intimate relationship between assessment, feedback, and continued learning, (3) systems of assessment, and (4) accreditation systems.


Medical Education | 2000

A review of the evaluation of clinical teaching: new perspectives and challenges

Linda Snell; Susan Tallett; Steven A. Haist; Richard Hays; John J. Norcini; Katinka J.A.H. Prince; Arthur I. Rothman; Richard Rowe

This article discusses the importance of the process of evaluation of clinical teaching for the individual teacher and for the programme. Measurement principles, including validity, reliability, efficiency and feasibility, and methods to evaluate clinical teaching are reviewed.


Medical Education | 2002

Is insight important? Measuring capacity to change performance

Richard Hays; Brian Jolly; L.J.M. Caldon; Peter McCrorie; Pauline McAvoy; I. C. McManus; J.J. Rethans

Background: Some doctors who perform poorly appear not to be aware of how their performance compares with accepted practice. The way that professionals maintain their existing expertise and acquire new knowledge and skills – that is, maintain their ‘currency’ of practice – requires a capacity to change. This capacity probably requires the individual doctor to possess insight into his or her performance as well as motivation to change. Levels of insight may vary between individuals, and at some point insight falls to a level that is inadequate for effective self‐regulation. Insight and performance may be critically related, and there are instances where increasing insight in the presence of decreasing performance can also cause difficulties.


Medical Education | 2002

Selecting performance assessment methods for experienced physicians

Richard Hays; Helena Davies; Jonathan Beard; L.J.M. Caldon; Elizabeth Farmer; P.M. Finucane; Peter McCrorie; David Newble; Lambert Schuwirth; G.R. Sibbald

Background: While much is now known about how to assess the competence of medical practitioners in a controlled environment, less is known about how to measure the performance in practice of experienced doctors working in their own environments. The performance of doctors depends increasingly on how well they function in teams and how well the health care system around them functions.


Medical Education | 2000

The accountability of clinical education: its definition and assessment

Elizabeth Murray; Larry D. Gruppen; Pamela Catton; Richard Hays; James O. Woolliscroft

Medical education is not exempt from increasing societal expectations of accountability. Competition for financial resources requires medical educators to demonstrate cost‐effective educational practice; health care practitioners, the products of medical education programmes, must meet increasing standards of professionalism; the culture of evidence‐based medicine demands an evaluation of the effect educational programmes have on health care and service delivery. Educators cannot demonstrate that graduates possess the required attributes, or that their programmes have the desired impact on health care, without appropriate assessment tools and measures of outcome.


Medical Education | 2001

Setting performance standards for medical practice: a theoretical framework

L. Southgate; Richard Hays; J. Norcini; H. Mulholland; B. Ayers; J. Woolliscroft; M. Cusimano; P. McAvoy; M. Ainsworth; S. Haist; M. Campbell

The assessment of performance in the real world of medical practice is now widely accepted as the goal of assessment at the postgraduate level. This is largely a validity issue, as it is recognised that tests of knowledge and clinical simulations cannot on their own really measure how medical practitioners function in the broader health care system. However, the development of standards for performance‐based assessment is not as well understood as it is in competency assessment, where simulations can more readily reflect narrower issues of knowledge and skills. This paper proposes a theoretical framework for the development of standards that reflect the more complex world in which experienced medical practitioners work.


Medical Education | 2001

Country report: Australia

David Prideaux; Nicholas Saunders; Kathryn Schofield; Lindon M.H. Wing; Jill Gordon; Richard Hays; Paul Worley; Anne Martin; Neil Paget

The last 10 years have been an interesting time for Australian medical education, despite reduced funding.


Medical Teacher | 2016

Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME Guide No. 37

Eliot Rees; Ashley W. Hawarden; Gordon Dent; Richard Hays; Joanna Bates; Andrew B. Hassell

Background: In the 11 years since its development at McMaster University Medical School, the multiple mini-interview (MMI) has become a popular selection tool. We aimed to systematically explore, analyze and synthesize the evidence regarding MMIs for selection to undergraduate health programs.

Methods: The review protocol was peer-reviewed and prospectively registered with the Best Evidence Medical Education (BEME) collaboration. Thirteen databases were searched using 34 terms and their Boolean combinations. Seven key journals were hand-searched from 2004 onwards. The reference sections of all included studies were screened. Studies meeting the inclusion criteria were coded independently by two reviewers using a modified BEME coding sheet. Extracted data were synthesized through narrative synthesis.

Results: A total of 4338 citations were identified and screened, resulting in 41 papers that met the inclusion criteria. Thirty-two studies report data for selection to medicine, six for dentistry, three for veterinary medicine, one for pharmacy, one for nursing, one for rehabilitation, and one for health science. Five studies investigated selection to more than one profession. MMIs used for selection to undergraduate health programs appear to have reasonable feasibility, acceptability, validity, and reliability. Reliability is optimized by including 7–12 stations, each with one examiner. The evidence is stronger for face validity, with more research needed to explore content validity and predictive validity. In published studies, MMIs do not appear biased against applicants on the basis of age, gender, or socio-economic status. However, applicants of certain ethnic and social backgrounds did less well in a very small number of published studies. Performance on MMIs does not correlate strongly with other measures of noncognitive attributes, such as personality inventories and measures of emotional intelligence.

Discussion: An MMI does not automatically mean a more reliable selection process, but it can if carefully designed. Effective MMIs require careful identification of the noncognitive attributes sought by the program and institution. Attention needs to be given to the number of stations, the blueprint, and examiner training.

Conclusion: More work is required on MMIs, as they may disadvantage groups of certain ethnic or social backgrounds. There is a compelling argument for multi-institutional studies to investigate areas such as the relationship of MMI content to curriculum domains, graduate outcomes, and social missions; relationships of applicants’ performance on different MMIs; bias in selecting applicants from minority groups; and the long-term outcomes appropriate for studies of predictive validity.


Medical Education | 1998

In‐training assessment in postgraduate training for general practice

Richard Hays; Rod Wellard

Assessment within general practice training curricula is necessary both to guide learning and to make certification decisions about competence to practise without supervision in the community, but there is a risk that the two roles could become confused. This paper proposes a conceptual framework that explains the relationship between formative assessment, in‐training assessment and end‐point assessment, as adopted by the Royal Australian College of General Practitioners Training Programme. The literature is reviewed to suggest assessment formats that could support decisions about progress through training without harming the important role of providing feedback to guide learners.


Teaching and Learning in Medicine | 1990

Self‐evaluation of videotaped consultations

Richard Hays

Videotaped consultations are now widely used as a means of providing feedback to medical graduates and undergraduates. Their aim is to increase self‐awareness of performance and, thereby, enhance motivation for improvement. Central to the success of this educational intervention is the ability of learners to evaluate their own performance. In this study, self‐evaluation scores of trainee postgraduate general practitioners were recorded during two video debriefing sessions at the beginning and end of a 3‐month supervised general‐practice attachment. Self‐evaluation was demonstrated to be influenced by self‐observation and receipt of feedback, indicating that self‐awareness was increased. A major benefit of experiencing video review based on self‐evaluation may be that it provides training in self‐evaluation. The ability to realistically self‐evaluate may facilitate self‐directed learning and be an important factor contributing toward maintaining competence throughout a demanding career.

Collaboration


Dive into Richard Hays's collaborations.

Top Co-Authors

Lisa Crossland

University of Queensland
