Publication


Featured research published by Debra Pugh.


Medical Education | 2014

Progress testing: is there a role for the OSCE?

Debra Pugh; Claire Touchie; Timothy J. Wood; Susan Humphrey-Murto

The shift from a time‐based to a competency‐based framework in medical education has created a need for frequent formative assessments. Many educational programmes use some form of written progress test to identify areas of strength and weakness and to promote continuous improvement in their learners. However, the role of performance‐based assessments, such as objective structured clinical examinations (OSCEs), in progress testing remains unclear.


Medical Education | 2014

Supervising incoming first-year residents: faculty expectations versus residents' experiences

Claire Touchie; André De Champlain; Debra Pugh; Steven M. Downing; Georges Bordage

First‐year residents begin clinical practice in settings in which attending staff and senior residents are available to supervise their work. There is an expectation that, while being supervised and as they become more experienced, residents will gradually take on more responsibilities and function independently.


Medical Teacher | 2016

The OSCE progress test – Measuring clinical skill development over residency training

Debra Pugh; Claire Touchie; Susan Humphrey-Murto; Timothy J. Wood

Purpose: The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills. Methods: Data from eight administrations of an IM-OSCE were analyzed retrospectively. Data were scaled to a mean of 500 and standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents’ performance progressed over time. Results: Residents’ total IM-OSCE scores (n = 244) increased over training from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores, including only those who participated in the IM-OSCE for all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills scores increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97). Conclusions: The IM-OSCE can be used to demonstrate progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
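
As a rough illustration of the scaling step described above (a minimal sketch, not the study's analysis code; the function name and raw scores are hypothetical), raw OSCE totals can be linearly transformed to a mean of 500 and SD of 100:

import statistics

def rescale(scores, target_mean=500, target_sd=100):
    # Linearly transform raw scores to the target mean and standard deviation.
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [target_mean + target_sd * (s - mean) / sd for s in scores]

raw_scores = [62.5, 71.0, 58.3, 80.2, 66.9]  # hypothetical raw OSCE totals (%)
print([round(s) for s in rescale(raw_scores)])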


Medical Education | 2016

Do OSCE progress test scores predict performance in a national high-stakes examination?

Debra Pugh; Farhan Bhanji; Gary Cole; Jonathan Dupre; Rose Hatala; Susan Humphrey-Murto; Claire Touchie; Timothy J. Wood

Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency‐based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment that is used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high‐stakes examination.


Medical Education | 2016

Taking the sting out of assessment: is there a role for progress testing?

Debra Pugh; Glenn Regehr

It has long been understood that assessment is an important driver for learning. However, recently, there has been growing recognition that this powerful driving force of assessment has the potential to undermine curricular efforts. When the focus of assessment is to categorise learners into competent or not (i.e. assessment of learning), rather than being a tool to promote continuous learning (i.e. assessment for learning), there may be unintended consequences that ultimately hinder learning. In response, there has been a movement toward constructing assessment not only as a measurement problem, but also as an instructional design problem, and exploring more programmatic models of assessment across the curriculum. Progress testing is one form of assessment that has been introduced, in part, to attempt to address these concerns. However, in order for any assessment tool to be successful in promoting learning, careful consideration must be given to its implementation.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2015

Assessing Procedural Competence: Validity Considerations.

Debra Pugh; Timothy J. Wood; Boulet

Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees’ skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.


Medical Education | 2015

Use of an error-focused checklist to identify incompetence in lumbar puncture performances

Irene W. Y. Ma; Debra Pugh; Briseida Mema; Mary Brindle; Lara Cooke; Julie N. Stromer

Checklists are commonly used in the assessment of procedural competence. However, on most checklists, high scores are often unable to rule out incompetence as the commission of a few serious procedural errors typically results in only a minimal reduction in performance score. We hypothesised that checklists constructed based on procedural errors may be better at identifying incompetence.
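
As a toy contrast (hypothetical items and error list, not the study's instrument), a conventional completion-based checklist score can remain high despite a serious error, whereas an error-focused rule flags the performance:

performed = {"position patient": True, "sterile technique": False,
             "local anaesthetic": True, "correct needle angle": True,
             "measure opening pressure": True}

critical_errors = {"sterile technique"}  # errors deemed serious enough to fail

# Conventional score: one missed item barely lowers the percentage.
score = sum(performed.values()) / len(performed) * 100
print(f"Checklist score: {score:.0f}%")  # 80%

# Error-focused rule: any critical error flags the performance as not competent.
committed = {item for item, done in performed.items() if not done}
print("Competent" if not (committed & critical_errors) else "Not competent")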


Teaching and Learning in Medicine | 2016

Direct Observation of Clinical Skills Feedback Scale: Development and Validity Evidence

Samantha Halman; Nancy L. Dudek; Timothy J. Wood; Debra Pugh; Claire Touchie; Sean McAleer; Susan Humphrey-Murto

Construct: This article describes the development and validity evidence behind a new rating scale to assess feedback quality in the clinical workplace. Background: Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use. Approach: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and review, 9 features were developed into the DOCS-FBS. Four rater types (residents n = 21, medical students n = 8, faculty n = 12, and educators n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey to gather data on a 5-point Likert scale to inform the ease of use, clarity, knowledge acquisition, and acceptability of the scale. Results: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher and lower quality feedback. There were no significant differences between rater types (range = 2.36–2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy in items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5). Conclusions: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality. The scale has been shown to have excellent internal consistency. We foresee the DOCS-FBS being used as a means to provide objective evidence that faculty development efforts aimed at improving feedback skills can yield results through formal assessment of feedback quality.


Medical Teacher | 2016

Using cognitive models to develop quality multiple-choice questions

Debra Pugh; André F. De Champlain; Mark J. Gierl; Hollis Lai; Claire Touchie

With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intensive process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs). In AIG, computer technology is used to generate test items from cognitive models (i.e. representations of the knowledge and skills that are required to solve a problem). The main advantage yielded by AIG is the efficiency in generating items. Although technology for AIG relies on a linear programming approach, the same principles can also be used to improve traditional committee-based processes used in the development of MCQs. Using this approach, content experts deconstruct their clinical reasoning process to develop a cognitive model which, in turn, is used to create MCQs. This approach is appealing because it: (1) is efficient; (2) has been shown to produce items with psychometric properties comparable to those generated using a traditional approach; and (3) can be used to assess higher order skills (i.e. application of knowledge). The purpose of this article is to provide a novel framework for the development of high-quality MCQs using cognitive models.
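
As a rough sketch of the template-driven idea behind AIG (this is not the authors' software; the item template, variables, and answer options are hypothetical placeholders), a simple cognitive model can be expressed as variables whose values are substituted into an item template to generate many MCQ variants:

from itertools import product

template = ("A {age}-year-old patient presents with {finding}. "
            "Which is the most appropriate next step?")

# Toy cognitive model: allowed values for each variable in the template.
model = {
    "age": ["25", "60"],
    "finding": ["acute chest pain", "sudden severe headache"],
}

# Key the correct answer and distractors on the clinical finding.
options = {
    "acute chest pain": {"key": "Obtain an ECG",
                         "distractors": ["Order an EEG", "Discharge home"]},
    "sudden severe headache": {"key": "Obtain a non-contrast head CT",
                               "distractors": ["Order an ECG", "Discharge home"]},
}

for age, finding in product(model["age"], model["finding"]):
    stem = template.format(age=age, finding=finding)
    opts = [options[finding]["key"], *options[finding]["distractors"]]
    print(stem, opts)

In practice, a cognitive model would also encode constraints among variables so that only clinically coherent combinations are generated.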


Medical Education | 2014

The impact of cueing on written examinations of clinical decision making: a case study

Isabelle Desjardins; Claire Touchie; Debra Pugh; Timothy J. Wood; Susan Humphrey-Murto

Selected‐response (SR) formats (e.g. multiple‐choice questions) and constructed‐response (CR) formats (e.g. short‐answer questions) are commonly used to test the knowledge of examinees. Scores on SR formats are typically higher than scores on CR formats. This difference is often attributed to examinees being cued by options within an SR question, but there could be alternative explanations. The purpose of this study was to expand on previous work with regards to the cueing effect of SR formats by directly contrasting conditions that support cueing versus memory of previously seen questions.

Collaboration


Dive into Debra Pugh's collaborations.

Top Co-Authors

Timothy J. Wood

Medical Council of Canada
