
Publications


Featured research published by Michele P. Pugnaire.


Journal of General Internal Medicine | 2006

Learning from Mistakes: Factors that Influence How Students and Residents Learn from Medical Errors

Melissa A. Fischer; Kathleen M. Mazor; Joann L. Baril; Eric J. Alper; Deborah M. DeMarco; Michele P. Pugnaire

CONTEXT: Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE: To identify major factors and areas of tension in trainees’ learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS: Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS: Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict inherent in reporting errors to seniors who were responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS: Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and the balance between individual and systems responsibility.


Academic Medicine | 2004

Teaching communication in clinical clerkships: models from the Macy initiative in health communications

Adina Kalet; Michele P. Pugnaire; Kathy Cole-Kelly; Regina Janicik; Emily Ferrara; Mark D. Schwartz; Mack Lipkin; Aaron Lazare

Medical educators have a responsibility to teach students to communicate effectively, yet ways to accomplish this are not well-defined. Sixty-five percent of medical schools teach communication skills, usually in the preclinical years; however, communication skills learned in the preclinical years may decline by graduation. To address these problems the New York University School of Medicine, Case Western Reserve University School of Medicine, and the University of Massachusetts Medical School collaborated to develop, establish, and evaluate a comprehensive communication skills curriculum. This work was funded by the Josiah P. Macy, Jr. Foundation and is therefore referred to as the Macy Initiative in Health Communication. The three schools use a variety of methods to teach third-year students in each school a set of effective clinical communication skills. In a controlled trial this cross-institutional curriculum project proved effective in improving communication skills of third-year students as measured by a comprehensive, multistation, objective structured clinical examination. In this paper the authors describe the development of this unique, collaborative initiative. Grounded in a three-school consensus on the core skills and critical components of a communication skills curriculum, this article illustrates how each school tailored the curriculum to its own needs. In addition, the authors discuss the lessons learned from conducting this collaborative project, which may provide guidance to others seeking to establish effective cross-disciplinary skills curricula.


International Journal of Impotence Research | 2003

Sexual health innovations in undergraduate medical education

Emily Ferrara; Michele P. Pugnaire; Julie A. Jonassen; Katherine K. O'Dell; Marjorie Clay; David S. Hatem; Michele M. Carlin

Recent national and global initiatives have drawn attention to the importance of sexual health to individuals’ well-being. These initiatives advocate enhancement of efforts to address this under-represented topic in health professions curricula. The University of Massachusetts Medical School (UMMS) has undertaken a comprehensive effort to develop an integrated curriculum in sexual health. The UMMS project draws upon the expertise of a multidisciplinary faculty of clinicians, basic scientists, a medical ethicist, and educators. This article describes the project's genesis and development at UMMS, and reports on three innovations in sexual health education implemented as part of this endeavor.


Journal of Neurochemistry | 1977

Comparison of the 'binding' of beta-alanine and gamma-aminobutyric acid in synaptosomal-mitochondrial fractions of rat brain.

E. Somoza; Michele P. Pugnaire; L. M. Munoz; C. G. Portal; A. E. Ibanez; F. V. DeFeudis

Na+‐dependent ‘binding’ of β‐alanine and GABA was examined with synaptosomal‐mitochondrial fractions of rat brain incubated for 10 min at 0°C. GABA was bound to a much greater extent than β‐alanine to particles of cerebral cortex, whole cerebellum and brain stem. For cerebral cortex, the binding capacity (Bmax) for GABA was about 18 times greater than that for β‐alanine, and the affinity of the particles for GABA was about 2′ times greater than for β‐alanine. The order of potency of GABA binding to brain regions was cerebral cortex > cerebellum > brain stem, whereas that for β‐alanine was the reverse. If the binding of β‐alanine is taken to indicate the glial component of the Na+‐dependent binding process for GABA, then most of the GABA was bound to neuronal elements under the conditions employed.


Teaching and Learning in Medicine | 2010

Using standardized patients to assess professionalism: a generalizability study

Mary L. Zanetti; Lisa A. Keller; Kathleen M. Mazor; Michele M. Carlin; Eric J. Alper; David S. Hatem; Wendy L. Gammon; Michele P. Pugnaire

Background: Assessment of professionalism in undergraduate medical education is challenging. One approach that has not been well studied in this context is performance-based examinations. Purpose: This study sought to investigate the reliability of standardized patients’ scores of students’ professionalism in performance-based examinations. Methods: Twenty students were observed on 4 simulated cases involving professional challenges; 9 raters evaluated each encounter on 21 professionalism items. Correlational and multivariate generalizability (G) analyses were conducted. Results: G coefficients were .75, .53, and .68 for physicians, standardized patients (SPs), and lay raters, respectively. Composite G coefficient for all raters reached acceptable level of .86. Results indicated SP raters were more variable than other rater types in severity with which they rated students, although rank ordering of students was consistent among SPs. Conclusions: SPs’ ratings were less reliable and consistent than physician or lay ratings, although the SPs rank ordered students more consistently than the other rater types.


Academic Medicine | 2004

Tracking the Longitudinal Stability of Medical Students’ Perceptions Using the AAMC Graduation Questionnaire and Serial Evaluation Surveys

Michele P. Pugnaire; Urip Purwono; Mary L. Zanetti; Michele M. Carlin

Background. This study examined the longitudinal stability of students’ perceptions by comparing ratings on similar survey items in three sequential evaluations: end-of-clerkship (EOC), AAMC graduation questionnaire (GQ), and a postgraduate survey (PGY1). Method. For the classes of 2000 and 2001, ratings were compiled from EOC evaluations and comparable items from the GQ. For both cohorts, selected GQ items were included in the PGY1 survey and these ratings were compiled. Matched responses from EOC versus GQ and PGY1 versus GQ were compared. Results. Proportions of “excellent” ratings were consistent across EOC and GQ surveys for all clerkships. Comparison of GQ and PGY1 ratings revealed significant differences in only seven of 31 items. Conclusion. Student perceptions as measured by GQ ratings are notably consistent across the clinical years and internship. This longitudinal stability supports the usefulness of the GQ in programmatic assessment and reinforces its value as a measure of student satisfaction.


Teaching and Learning in Medicine | 2011

Global Longitudinal Pathway: Has Medical Education Curriculum Influenced Medical Students’ Skills and Attitudes Toward Culturally Diverse Populations?

Mary L. Zanetti; Michael A. Godkin; Joshua P. Twomey; Michele P. Pugnaire

Background: The Pathway represents a longitudinal program for medical students, consisting of both domestic and international experiences with poor populations. A previous study reported no significant attitudinal changes toward the medically indigent between Pathway and non-Pathway students. Purpose: The purpose of this study was to investigate and differentiate the skills and attitudes of Pathway and non-Pathway students in working with culturally diverse populations by conducting quantitative and qualitative analyses. Methods: Selected items from a cultural assessment were analyzed using independent t-tests and a proportional analysis using approximation of the binomial distribution. In addition, a qualitative assessment of non-Pathway and Pathway students was conducted. Results: A statistically significant difference was found at the end of Years 2, 3, and 4 regarding student confidence ratings, and qualitative results had similar findings. Conclusions: Clear and distinct differences were found between the two studied groups, indicating that the increased confidence may be attributable to exposure to the Pathway program.


Academic Medicine | 2000

An investigation of the impacts of different generalizability study designs on estimates of variance components and generalizability coefficients.

Lisa A. Keller; Kathleen M. Mazor; H. Swaminathan; Michele P. Pugnaire

In recent years, performance assessments have become increasingly popular in medical education. While the term "performance assessment" can be applied to many different types of assessments, in medical education this term usually refers to some sort of simulated patient encounter, such as an objective structured clinical examination (OSCE) or a computer simulation of an encounter. These types of assessments appeal to many educators because the tasks or items used are often seen as more realistic than items on multiple-choice examinations. However, this increased "realism" or apparent authenticity comes at a cost—performance examinations are typically more time-consuming and expensive both to administer and to score. On an OSCE, each encounter with a standardized patient is typically scored as a single item, often resulting in an examinee’s completing only four to eight items in a two-hour testing period. In contrast, an examinee might complete 100 to 150 items during a two-hour multiple-choice examination. The fact that performance examinations are typically relatively short means that test users must pay particular attention to the reliability and validity of test scores. In general, other things being equal, a shorter test will result in scores that are less reliable than a longer test. Lower reliability reflects greater error. Adding more items is one way that test developers may increase reliability. On a multiple-choice test, it is relatively inexpensive to write and administer additional items. However, on a performance test both the development and administration of even a single new item can be expensive, and often must be justified in terms of expected gains in score precision. A second consideration in performance examinations is that scoring is typically more difficult and expensive than scoring of multiple-choice examinations. Expert or trained raters are generally required to review each performance or a sample of performances. 
Such ratings may be used to score specific performances or to develop scoring criteria or weighting schemes. In either case, raters are a potential source of error. Generalizability theory provides a framework for estimating the relative magnitudes of various sources of error in a set of scores. In most performance assessments, both items and raters are potential sources of error. Generalizability theory allows estimation of the error associated with each of these sources separately, as well as the relevant interaction effects. In a generalizability study (G study), the variance in a set of scores is partitioned in a manner similar to that used in the analysis of variance. However, in a G study the emphasis is not on testing for statistical significance, but rather on assessing the relative magnitudes of the variance components. Depending on the study design, different variance components can be estimated. Once the variance components are estimated, additional analyses can be conducted. In the framework of generalizability theory, the second stage of analysis is referred to as a decision study (D study). In a D study, the estimated variance components are used to estimate generalizability coefficients (comparable to reliability coefficients) under various measurement conditions. Thus, using the results from a single test administration, it is possible to estimate the impacts of changing both the number of raters and the number of items. This is an important benefit of conducting analyses based on generalizability theory. However, it must be stressed that the variance components and G coefficients are estimates, and as such will vary depending on the specific sample used. Given that the results of generalizability analyses are often used to make practical decisions about test implementation, it is important to collect the data for a G study in a way that will maximize the precision of the variance-components estimates. 
Given also that performance assessments are costly to administer and score, and that resources (time, raters, and money) are typically limited, the question of how available resources should be allocated for a G study is an important one. Is it preferable to collect data from 100 examinees on 16 items, or 200 examinees on eight items? Should four raters score 50 examinee performances, or should two raters score 100 performances? Decision studies may help to inform these types of decisions after the data are collected and analyzed, but D studies are based on G studies. To date there is no research we are aware of to help in planning data collection for a G study, especially under constraints. The purpose of the present study was to examine the impacts of different G-study designs. All of the designs simulated here contain the same number of data points, but the distributions of the data points over examinees, items, and raters are varied. By starting with a relatively large data set (200 medical student examinees, completing 16 items each, scored by four raters each for a total of 12,800 data points), we were able to conduct repeated sampling of different data-collection conditions and to construct empirical confidence intervals for variance components estimates. Computed confidence intervals were also constructed and compared with the empirically constructed intervals. A series of D studies was then conducted to illustrate how different sampling strategies and different samples within those strategies could have substantial impacts on the decisions that would be likely to be made based on such analyses. It should be stressed that the focus of this study was to illustrate the impacts of various sampling strategies, rather than to make decisions about this particular data set. 
We hope to inform and remind test designers and users that estimates are based on samples, and as such contain variability, and to illustrate the extent to which that variability is greatly affected by the data-collection procedure used.
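The D-study logic described above can be sketched in a few lines: once variance components have been estimated in the G study, the generalizability coefficient for any proposed number of items and raters follows directly. The sketch below assumes a fully crossed person x item x rater design and uses illustrative variance components, not values from this study.

```python
# D-study sketch for a fully crossed person x item x rater (p x i x r) design.
# Variance components below are illustrative assumptions, not the study's estimates.

def g_coefficient(var_p, var_pi, var_pr, var_pir_e, n_items, n_raters):
    """Relative (norm-referenced) generalizability coefficient.

    Universe-score (person) variance divided by itself plus relative error,
    where each interaction component is averaged over its facet sample sizes.
    """
    rel_error = (var_pi / n_items
                 + var_pr / n_raters
                 + var_pir_e / (n_items * n_raters))
    return var_p / (var_p + rel_error)

# Compare candidate D-study designs: doubling items or raters shrinks
# the corresponding error terms, raising the G coefficient.
for n_items in (8, 16):
    for n_raters in (2, 4):
        g = g_coefficient(var_p=0.50, var_pi=0.30, var_pr=0.10,
                          var_pir_e=0.60, n_items=n_items, n_raters=n_raters)
        print(f"{n_items} items, {n_raters} raters: G = {g:.2f}")
```

This illustrates the benefit the authors note: a single test administration yields variance-component estimates from which the reliability of many alternative item/rater allocations can be projected, though each projected coefficient inherits the sampling variability of the underlying estimates.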


Academic Medicine | 1996

An interdisciplinary course on domestic and family violence

Julie A. Jonassen; T. N. Burwick; Michele P. Pugnaire

No abstract available.


Academic Medicine | 1998

A first-year minicurriculum on TIA/stroke.

Susan Billings-Gagliardi; Nancy M. Fontneau; Michele P. Pugnaire

No abstract available.

Collaboration


Dive into Michele P. Pugnaire's collaboration.

Top Co-Authors

Mary L. Zanetti, University of Massachusetts Medical School
Michele M. Carlin, University of Massachusetts Medical School
Eric J. Alper, University of Massachusetts Medical School
Kathleen M. Mazor, University of Massachusetts Medical School
Susan V. Barrett, University of Massachusetts Medical School
Wendy L. Gammon, University of Massachusetts Medical School
Julie A. Jonassen, University of Massachusetts Medical School
Laura A. Sefton, University of Massachusetts Medical School
David S. Hatem, University of Massachusetts Medical School