Michael B. Donnelly
University of Kentucky
Publications
Featured research published by Michael B. Donnelly.
The Diabetes Educator | 1991
Martha M. Funnell; Robert M. Anderson; Marilynn S. Arnold; Patricia A. Barr; Michael B. Donnelly; Patricia D. Johnson; Denise Taylor-Moon; Neil H. White
We have learned much in the past 10 years about how to help patients acquire diabetes-related knowledge and skills and how to use strategies to help patients change behaviors. However, the application of knowledge and techniques should be guided by a relevant, coherent educational philosophy. Empowerment offers a practical conceptual framework for diabetes patient education. Empowering patients provides them with the knowledge, skills, and responsibility to effect change and has the potential to promote overall health and maximize the use of available resources. It is an idea whose time has come for diabetes education.
Annals of Surgery | 1995
David A. Sloan; Michael B. Donnelly; Richard W. Schwartz; William E. Strodel
Objective: The authors determine the reliability, validity, and usefulness of the Objective Structured Clinical Examination (OSCE) in the evaluation of surgical residents. Summary Background Data: Interest is increasing in using the OSCE as a measurement of clinical competence and as a certification tool. However, concerns exist about the reliability, feasibility, and cost of the OSCE. Experience with the OSCE in postgraduate training programs is limited. Methods: A comprehensive 38-station OSCE was administered to 56 surgical residents. Residents were grouped into three levels of training: interns, junior residents, and senior residents. The reliability of the examination was assessed by coefficient α; its validity, by the construct of experience. Differences between training levels and in performance on the various OSCE problems were determined by a three-way analysis of variance with two repeated measures and the Student-Newman-Keuls post hoc test. Pearson correlations were used to determine the relationship between OSCE and American Board of Surgery In-Training Examination (ABSITE) scores. Results: The reliability of the OSCE was very high (0.91). Performance varied significantly according to level of training (postgraduate year; p < 0.0001). Senior residents performed best, and interns performed worst. The OSCE problems differed significantly in difficulty (p < 0.0001). Overall scores were poor. Important and specific performance deficits were identified at all levels of training. The ABSITE clinical scores, unlike the basic science scores, correlated modestly with the OSCE scores when level of training was held constant. Conclusion: The OSCE is a highly reliable and valid clinical examination that provides unique information about the performance of individual residents and the quality of postgraduate training programs.
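The coefficient α reliability reported here (Cronbach's alpha) can be reproduced from a residents × stations score matrix. Below is a minimal sketch in Python using hypothetical scores, not the study's actual OSCE data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's (coefficient) alpha for an examinees x items score matrix."""
    # scores: rows = examinees (residents), columns = items (OSCE stations)
    k = scores.shape[1]                          # number of stations
    item_vars = scores.var(axis=0, ddof=1)       # variance of each station's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of residents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 56 residents x 38 stations, percent scores
rng = np.random.default_rng(0)
demo = rng.uniform(30, 90, size=(56, 38))
print(f"alpha = {cronbach_alpha(demo):.2f}")
```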
Teaching and Learning in Medicine | 1993
Porter Mayo; Michael B. Donnelly; Phyllis P. Nash; Richard W. Schwartz
This study identified the characteristics of effective tutors in a problem‐based learning (PBL) educational setting. Forty‐four junior medical students participated in two 6‐week PBL groups and evaluated their tutors based on a list of 12 characteristics. Statistical analyses of the students’ responses revealed that faculty members differed significantly in their possession of tutor skills, in the way they carried out the tutor skills, and in their performance of group‐management skills. Tutors were rated highest on participation in the sessions, enthusiasm, and level of comfort outside their area of expertise. They were rated lowest on providing feedback to the group and promoting psychosocial issues. The results indicate that students are highly satisfied with overall tutor performance despite significant differences among tutors. Two important characteristics of the effective tutor were identified: (a) helping students identify important issues and (b) providing feedback to students while encouraging f...
Pain | 1996
Paul A. Sloan; Michael B. Donnelly; Richard W. Schwartz; David A. Sloan
Pain control for cancer is a significant problem in health care, and lack of expertise by clinicians in assessing and managing cancer pain is an important cause of inadequate pain management. This study was designed to use performance‐based testing to evaluate the skills of resident physicians in assessing and managing the severe chronic pain of a cancer patient. Thirty‐three resident physicians (PGY 1–6) were presented with the same standardized severe cancer pain patient and asked to complete a detailed pain assessment. The residents then completed questions related to management of the cancer pain patient. In the cancer pain assessment, residents did well in assessing pain onset (70%), temporal pattern of pain (64%), and pain location (73%). However, only 33% and 45% of physicians adequately assessed the pain description and pain intensity, respectively, and assessment of pain‐relieving factors, previous pain history, and psychosocial history was done poorly or not at all by 70%, 88%, and 94% of residents. Only 58% of the residents were judged to be competent in this clinical cancer pain assessment. In the cancer pain management section, opioid analgesic therapy was prescribed by 98% of residents, and 91% used the oral route. However, only 18% of prescriptions were for regular use, and 88% of residents did not provide analgesics for breakthrough pain. A significant number of graduated physicians were judged to be not competent in the assessment and management of the severe pain of a standardized cancer patient. Opioids and NSAIDs were the analgesics of choice; however, most were prescribed on a PRN basis only. Co‐analgesics were rarely prescribed. Few physicians managed persistent, severe cancer pain according to the WHO guideline of increasing the opioid dose. The lack of significant difference in scores between junior and senior residents suggests that adequate cancer pain management is not being effectively taught in postgraduate training programs.
Journal of Gastrointestinal Surgery | 2002
Adrian Park; Donald B. Witzke; Michael B. Donnelly
Patient preference has driven the adoption of minimally invasive surgery (MIS) techniques and altered surgical practice. MIS training in surgical residency programs must teach new skill sets with steep learning curves to enable residents to master key procedures. Because no nationally recognized MIS curriculum exists, this study asked experts in MIS which laparoscopic procedures should be taught and how many cases are required for competency. Expert recommendations were compared to the number of cases actually performed by residents (Residency Review Committee [RRC] data). A detailed survey was sent nationwide to all surgical residency programs (academic and private) known to offer training in MIS and/or have a leader in the field. The response rate was approximately 52%. RRC data were obtained from the resident statistics summary report for 1998–1999. Experts identified core procedures for MIS training and consistently voiced the opinion that to become competent, residents need to perform these procedures many more times than the RRC data indicate they currently do. At present, American surgical residency programs do not meet the suggested MIS case range or volume required for competency. Residency programs need to be restructured to incorporate sufficient exposure to core MIS procedures. More expert faculty must be recruited to train residents to meet the increasing demand for laparoscopy.
Surgery | 1998
Richard W. Schwartz; Donald B. Witzke; Michael B. Donnelly; Terry D. Stratton; Amy V. Blue; David A. Sloan
BACKGROUND The Objective Structured Clinical Examination (OSCE) is an objective method for assessing clinical skills and can be used to identify deficits in clinical skill. During the past 5 years, we have administered 4 OSCEs to all general surgery residents and interns. METHODS Two OSCEs (1993 and 1994) were used as broad-based examinations of the core areas of general surgery; subsequent OSCEs (1995 and 1997) were used as needs assessments. For each year, the reliability of the entire examination was calculated with Cronbach's alpha. A reliability-based minimal competence score (MCS) was defined as the mean performance (in percent) minus the standard error of measurement for each group in 1997 (interns, junior residents, and senior residents). RESULTS The reliability of each OSCE was acceptable, ranging from 0.63 to 0.91. The MCS during the 4-year period ranged from 45% to 65%. In 1997, 4 interns, 2 junior residents, and 2 senior residents scored below their group's MCS. MCS for the groups increased across training levels in developmental fashion (P < .05). CONCLUSIONS Given the relatively stable findings observed, we conclude that (1) the OSCE can be used to reliably identify group and individual differences in clinical skills, and (2) we will continue to use this method to develop appropriate curricular remediation for deficits in both individuals and groups.
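The reliability-based minimal competence score (MCS) is defined above as a group's mean percent score minus the standard error of measurement. The abstract does not spell out the SEM formula, so the sketch below assumes the classical form SEM = SD × √(1 − reliability), with hypothetical intern scores:

```python
import numpy as np

def minimal_competence_score(group_scores, reliability):
    """Group mean minus the standard error of measurement.
    SEM = SD * sqrt(1 - reliability) is the classical-test-theory form;
    the paper's exact formula is not given in the abstract, so this is an assumption."""
    scores = np.asarray(group_scores, dtype=float)
    sem = scores.std(ddof=1) * np.sqrt(1.0 - reliability)
    return scores.mean() - sem

# Hypothetical intern OSCE percent scores and an assumed reliability of 0.80
interns = [48, 55, 61, 52, 58, 47, 63, 50]
print(f"MCS = {minimal_competence_score(interns, 0.80):.1f}%")
```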
Surgical Endoscopy and Other Interventional Techniques | 2003
Gina L. Adrales; Uyen B. Chu; Donald B. Witzke; Michael B. Donnelly; D. Hoskins; Michael J. Mastrangelo; Alejandro Gandsas; Adrian Park
Background: The goal of this study was to develop, test, and validate the efficacy of inexpensive mechanical minimally invasive surgery (MIS) model simulations for training faculty, residents, and medical students. We sought to demonstrate that trained and experienced MIS surgeon raters could reliably rate the MIS skills acquired during these simulations. Methods: We developed three renewable models that represent difficult or challenging segments of laparoscopic procedures: laparoscopic appendectomy (LA), laparoscopic cholecystectomy (LC), and laparoscopic inguinal hernia (LH). We videotaped 10 students, 12 surgical residents, and 1 surgeon receiving training on each of the models and again during their posttraining evaluation session. Five MIS surgeons then assessed the evaluation session performance. For each simulation, we asked them to rate overall competence (COM) and four skills: clinical judgment (respect for tissue) (CJ), dexterity (economy of movement) (DEX), serial/simultaneous complexity (SSC), and spatial orientation (SO). We computed intraclass correlation (ICC) coefficients to determine the extent of agreement (i.e., reliability) among ratings. Results: We obtained ICC values of 0.74, 0.84, and 0.81 for COM ratings on LH, LC, and LA, respectively. We also obtained the following ICC values for the same three models: CJ, 0.75, 0.83, and 0.89; DEX, 0.88, 0.86, and 0.89; SSC, 0.82, 0.82, and 0.82; and SO, 0.86, 0.86, and 0.87, respectively. Conclusions: We obtained very high reliability of performance ratings for competence and surgical skills using a mechanical simulator. Typically, faculty evaluations of residents in the operating room are much less reliable. In contrast, when faculty members observe residents in a controlled, standardized environment, their ratings can be very reliable.
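Agreement among the five surgeon raters is summarized above with intraclass correlation (ICC) coefficients. The abstract does not state which ICC model was used, so the following sketch computes a one-way random-effects ICC(1,1) from a performances × raters matrix of hypothetical ratings, purely for illustration:

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) from a performances x raters matrix.
    The specific ICC model used in the study is not given in the abstract;
    this single-rater, one-way form is an illustrative choice."""
    n, k = ratings.shape                     # n performances, k raters
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-performance and within-performance mean squares (one-way ANOVA)
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical: 23 videotaped performances rated on a 1-5 scale by 5 MIS surgeons
rng = np.random.default_rng(1)
true_skill = rng.uniform(1, 5, size=(23, 1))
demo = np.clip(true_skill + rng.normal(0, 0.4, size=(23, 5)), 1, 5)
print(f"ICC = {icc_oneway(demo):.2f}")
```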
Diabetes Care | 1989
Robert M. Anderson; Michael B. Donnelly; Clarice P. Gressard; Robert F. Dedrick
This article describes the development of a diabetes attitude scale (DAS) that was designed to measure the attitudes of health-care professionals (HCPs). The DAS was developed through the efforts of a national panel of diabetes experts. The panel developed a 60-item scale that was pilot tested and reduced to a 50-item scale. The 50-item scale was then mailed to a national sample of HCPs with an interest in diabetes. The surveys were returned by 633 nurses, 322 dietitians, 116 physicians, and 67 others, totaling 1138 returns (a return rate of 54%). The returned surveys were analyzed, and a 31-item DAS composed of 8 subscales resulted. Evidence for the reliability and validity of the 31-item DAS, along with the instrument itself, is included in this study.
The Diabetes Educator | 1992
Martha M. Funnell; Michael B. Donnelly; Robert M. Anderson; Patricia D. Johnson; Mary S. Oh
To determine the efficacy of and need for patient education methods and media, a needs assessment was sent to 816 members of the American Association of Diabetes Educators. Respondents (n=325, 40%) included 62% RNs, 36% RDs, 1% other; 62% were CDEs. Their mean number of years of experience in diabetes education was 8.5, and 99% routinely provided patient education. Respondents indicated that videotapes and slide tapes were the most educationally effective media and books and audiotapes were the least effective. Booklets and videotapes were the most cost-effective media and computer-assisted instruction the least cost-effective. While respondents perceived one-to-one counseling, skills training, and diabetes content sessions to be the three most educationally effective methods, support groups and large and small discussion groups were seen as the three most cost-effective educational methods. Among nine potential barriers to quality patient education listed, educators rated lack of third-party reimbursement as a major barrier most frequently and national availability of quality education materials as a barrier least frequently.
Academic Medicine | 1989
Michael B. Donnelly; James O. Woolliscroft
Third‐year medical students used 12 descriptive items to evaluate the teaching skills of first‐year residents, senior medical residents, preceptors (internal medicine fellows), and attending physicians. Intraclass correlations showed that the students were able to judge their instructors reliably. Further analyses were then carried out to determine whether students differentially evaluated the four instructor groups. Three of the descriptive items that related to overall evaluations, as well as the mean rating of all items, indicated no group differences. However, when the groups were compared on specific teaching characteristics (by means of a multiple‐group discriminant function analysis), systematic differences were found. The first function differentiated the groups in terms of the cognitive and experiential characteristics of the instructors, with attending physicians being rated the highest and first‐year residents the lowest. In contrast, the third function separated the groups in terms of interpersonal skills; on this function, the senior medical residents were rated the highest and preceptors the lowest. It is concluded that students make sophisticated judgments in evaluating their clinical teachers.
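The multiple-group discriminant function analysis described above can be illustrated with scikit-learn's LinearDiscriminantAnalysis: with four instructor groups there are at most three discriminant functions, and group differences appear as separation of the group centroids along those functions. The ratings below are hypothetical, not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical ratings: 200 evaluations x 12 descriptive items, labelled by
# instructor group (0=first-year resident, 1=senior resident, 2=preceptor,
# 3=attending). The data are illustrative only, so the centroids carry no meaning.
rng = np.random.default_rng(2)
X = rng.normal(loc=3.5, scale=0.6, size=(200, 12))
y = rng.integers(0, 4, size=200)

# With 4 groups, at most 3 discriminant functions can be extracted
lda = LinearDiscriminantAnalysis(n_components=3)
functions = lda.fit_transform(X, y)

for g in range(4):
    centroid = functions[y == g].mean(axis=0)
    print(f"group {g} centroid on functions 1-3: {np.round(centroid, 2)}")
```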