
Publications


Featured research published by Gary L. Beck Dallaghan.


Academic Psychiatry | 2017

Medical School Factors Associated with Higher Rates of Recruitment into Psychiatry

John J. Spollen; Gary L. Beck Dallaghan; Gregory W. Briscoe; Nancy D. Delanoche; Deborah J. Hales

Objective: The medical school a student attends appears to be a factor in whether students eventually match into psychiatry. Knowledge of which factors are associated with medical schools with higher recruitment rates into psychiatry may assist in developing strategies to increase recruitment. Methods: Psychiatry leaders in medical student education at the 25 highest- and lowest-recruiting US allopathic schools were surveyed concerning various factors that could be important, such as curriculum, educational leadership, and the presence of anti-psychiatry stigma. Survey results from high-recruiting schools were compared with those from low-recruiting schools using Mann-Whitney U tests. Results: Factors significantly associated (p < .05) with higher-recruiting schools included better reputation of the psychiatry department and residents, perceived higher respect for psychiatry among non-psychiatry faculty, less perception that students dissuaded other students from pursuing psychiatry, and longer clerkship length. Conclusions: Educational culture and climate factors may have a significant impact on psychiatry recruitment rates. Clerkship length was associated with higher-recruiting schools, but several previous studies with more complete samples have not shown this.
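
The group comparison described in this abstract lends itself to a brief illustration. The sketch below is not the authors' analysis code; it assumes hypothetical Likert-style ratings for one survey factor and shows how ratings from high- and low-recruiting schools could be compared with a Mann-Whitney U test in Python.

# Hedged sketch: comparing a hypothetical survey factor (e.g., perceived respect
# for psychiatry, rated 1-5) between high- and low-recruiting schools.
# The ratings below are placeholders, not data from the study.
from scipy.stats import mannwhitneyu

high_recruiting = [5, 4, 4, 5, 3, 4, 5, 4]
low_recruiting = [3, 2, 4, 3, 2, 3, 3, 2]

u_stat, p_value = mannwhitneyu(high_recruiting, low_recruiting, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # a factor is flagged if p < .05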


Teaching and Learning in Medicine | 2016

The Community Preceptor Crisis: Recruiting and Retaining Community-Based Faculty to Teach Medical Students—A Shared Perspective From the Alliance for Clinical Education

Jennifer G. Christner; Gary L. Beck Dallaghan; Gregory W. Briscoe; Petra M. Casey; Ruth Marie E Fincher; Lynn M. Manfred; Katherine I. Margo; Peter Muscarella; Joshua E. Richardson; Joseph Safdieh; Beat D. Steiner

ABSTRACT Issue: Community-based instruction is invaluable to medical students, as it provides “real-world” opportunities for observing and following patients over time while refining history taking, physical examination, differential diagnosis, and patient management skills. Community-based ambulatory settings can be more conducive to practicing these skills than highly specialized, academically based practice sites. The Association of American Medical Colleges and other national medical education organizations have expressed concern about recruitment and retention of preceptors to provide high-quality educational experiences in community-based practice sites. These concerns stem from constraints imposed by documentation in electronic health records; perceptions that student mentoring is burdensome, resulting in decreased clinical productivity; and competition between allopathic, osteopathic, and international medical schools for finite resources for medical student experiences. Evidence: In this Alliance for Clinical Education position statement, we provide a consensus summary of representatives from national medical education organizations in 8 specialties that offer clinical clerkships. We describe the current challenges in providing medical students with adequate community-based instruction and propose potential solutions. Implications: Our recommendations are designed to help clerkship directors and medical school leaders overcome current challenges and ensure high-quality, community-based clinical learning opportunities for all students. They include suggesting ways to orient community clinic sites for students, explaining how students can add value to the preceptor's practice, focusing on educator skills development, recognizing preceptors who excel in their role as educators, and suggesting forms of compensation.


Medical Education Online | 2016

Faculty attitudes about interprofessional education.

Gary L. Beck Dallaghan; Erin Hoffman; Elizabeth Lyden; Catherine Bevil

Background: Interprofessional education (IPE) is an important component of training health care professionals. Research is limited in exploring the attitudes that faculty hold regarding IPE and what barriers they perceive to participating in IPE. The purpose of this study was to identify faculty attitudes about IPE and to identify barriers to participating in campus-wide IPE activities. Methods: A locally used questionnaire, the Nebraska Interprofessional Education Attitudes Scale (NIPEAS), was used to assess attitudes related to interprofessional collaboration. Questions regarding perceived barriers were included at the end of the questionnaire. Descriptive and non-parametric statistics were used to analyze the results in aggregate as well as by college. In addition, open-ended questions were analyzed using an immersion/crystallization framework to identify themes. Results: The results showed that faculty had positive attitudes toward IPE, indicating that attitude is not a barrier to participating in IPE activities. The most common barriers to participation were scheduling conflicts ([Formula: see text] = 19.17, p = 0.001), lack of department support ([Formula: see text](4,285) = 10.09, p = 0.039), and lack of awareness of events ([Formula: see text] = 26.38, p = 0.000). Narrative comments corroborated that scheduling conflicts are an issue because of other priorities. Those who commented also added to the list of barriers, including relevance of the activities, location, and prior negative experiences. Discussion: With faculty attitudes being positive, the exploration of faculty's perceived barriers to IPE was considered even more important. Identifying these barriers will allow us to modify our IPE activities from large, campus-wide events to smaller activities that are longitudinal in nature, embedded within the current curriculum, and involving more authentic experiences.
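
The abstract reports non-parametric comparisons by college, but the exact test statistics are elided in the source text above. As a hedged illustration only, the sketch below shows one common non-parametric approach (a Kruskal-Wallis test) for comparing attitude scores across colleges in Python; the scores and college groupings are hypothetical.

# Illustrative only: the source does not name the exact test, so this shows a
# generic Kruskal-Wallis comparison of NIPEAS-style attitude scores by college.
# All values are hypothetical placeholders.
from scipy.stats import kruskal

medicine = [4.2, 3.8, 4.5, 4.0, 3.9]
nursing = [4.6, 4.4, 4.7, 4.3, 4.5]
pharmacy = [3.9, 4.1, 3.7, 4.0, 4.2]

h_stat, p_value = kruskal(medicine, nursing, pharmacy)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")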


The Journal of Pediatrics | 2017

United States Medical Licensing Examination and American Board of Pediatrics Certification Examination Results: Does the Residency Program Contribute to Trainee Achievement?

Thomas R. Welch; Brad G. Olson; Elizabeth K. Nelsen; Gary L. Beck Dallaghan; Gloria Kennedy; Ann S. Botash

Objective: To determine whether training site or prior examinee performance on the US Medical Licensing Examination (USMLE) step 1 and step 2 might predict pass rates on the American Board of Pediatrics (ABP) certifying examination. Study design: Data from graduates of pediatric residency programs completing the ABP certifying examination between 2009 and 2013 were obtained. For each graduate, results of the initial ABP certifying examination were obtained, as well as results on the National Board of Medical Examiners (NBME) step 1 and step 2 examinations. Hierarchical linear modeling was used to nest first-time ABP results within training programs to isolate the program contribution to ABP results while controlling for USMLE step 1 and step 2 scores. Stepwise linear regression was then used to determine which of these examinations was a better predictor of ABP results. Results: A total of 1110 graduates of 15 programs had complete testing results and were included in the analysis. Mean ABP scores for these programs ranged from 186.13 to 214.32. The hierarchical linear model suggested that the interaction of step 1 and step 2 scores predicted ABP performance (F[1, 1007.70] = 6.44, P = .011). In a multilevel model nesting examinees within training programs, both USMLE step examinations predicted first-time ABP results (b = .002, t = 2.54, P = .011). Linear regression analyses indicated that step 2 results were a better predictor of ABP performance than step 1 or a combination of the two USMLE scores. Conclusions: Performance on the USMLE examinations, especially step 2, predicts performance on the ABP certifying examination. The contribution of training site to ABP performance was statistically significant, though it contributed modestly to the effect compared with prior USMLE scores.
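
As a rough sketch of the modeling approach named above (a hierarchical or mixed-effects model with examinees nested in training programs, controlling for USMLE scores), the following Python example uses statsmodels. It is not the study's code; the file name, column names, and the step 1 × step 2 interaction term are assumptions based on the abstract.

# Hedged sketch: a mixed-effects model nesting first-time ABP scores within
# residency programs while controlling for USMLE step 1 and step 2 scores.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abp_usmle_scores.csv")  # one row per graduate (hypothetical file)
# expected columns: abp_score, step1, step2, program_id

model = smf.mixedlm("abp_score ~ step1 * step2", data=df, groups=df["program_id"])
result = model.fit()
print(result.summary())  # random-intercept variance reflects the program contribution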


Academic Psychiatry | 2017

Before You Send Out that Survey: The Nuts and Bolts of Implementing a Medical Student Survey Study

Jeffrey J. Rakofsky; Gary L. Beck Dallaghan

Survey studies are commonly used in medical student education research. From 2011 to 2012, 24% of the articles published in Medical Teacher included surveys in their study design [1]. For those without much research experience, survey studies may seem intuitive and less intimidating than alternative forms of research. However, poorly planned survey studies can lead to inaccurate conclusions. This was observed in Gallup's election survey conducted in the final week before the 2012 US presidential election, which incorrectly predicted that Mitt Romney would win. Upon review, Gallup identified a number of problems in the study design that may have explained the result: a nonstandardized sampling strategy, misidentification of likely voters, underrepresentation of some regions of the country, and faulty representation of race and ethnicity [2]. Medical educators who intend to conduct survey-based research studies must understand and apply the basics of survey study design and survey development. Doing so will increase the validity of their results and reduce potential criticism from a journal's peer reviewers. In this primer, we assist the novice medical educator-researcher by reviewing the nuts and bolts of developing a survey study of medical students (see Table 1). Analyzing and interpreting survey results is beyond the scope of this review and is not included in this discussion.


Medical Education Online | 2016

Does student performance on preclinical OSCEs relate to clerkship grades?

Margot Chima; Gary L. Beck Dallaghan

Background: Objective structured clinical examinations (OSCEs) have been used to assess the clinical competence and interpersonal skills of healthcare professional students for decades. However, the relationship between preclinical (second-year, or M2) OSCE grades and clerkship performance had not previously been evaluated, so it was explored to provide information to educators at the University of Nebraska Medical Center (UNMC). In addition, the relationship between M2 OSCE communication scores (a portion of the total score) and third-year (M3) Internal Medicine (IM) clerkship OSCE scores was explored. Lastly, because conflicting evidence exists about the relationship between the amount of previous clinical experience and OSCE performance, the relationship between M3 IM clerkship OSCE scores and the timing of the clerkship in the academic year was also explored. Methods: Data from UNMC M2 OSCEs and M3 IM clerkship OSCEs were obtained for graduates of the 2013 and 2014 classes. Specifically, the following data points were collected: M2 fall OSCE total, M2 fall OSCE communication, M2 spring OSCE total, M2 spring OSCE communication, and M3 IM clerkship OSCE total percentages. Data were organized by class, M3 IM clerkship OSCE performance, and timing of the clerkship. Microsoft Excel and SPSS were used for data organization and analysis. Results: Of the 245 records, 229 (93.5%) had data points for all metrics of interest. Significant differences between the classes of 2013 and 2014 existed for average M2 spring total, M2 spring communication, and M3 IM clerkship OSCE scores. Retrospectively, there were no differences in M2 OSCE performances based on how students scored on the M3 IM clerkship OSCE. M3 IM clerkship OSCE performance improved for those students who completed the clerkship last in the academic year. Conclusions: There were inconsistencies in OSCE performances between the classes of 2013 and 2014, but more information is needed to determine whether this is because of testing variability or heterogeneity from class to class. Although there were no differences in preclinical scores based on M3 IM clerkship OSCE scores, students would benefit from a longitudinal review of their OSCE performance over their medical training. Additionally, students may benefit from more reliable and valid forms of assessing communication. In general, students who took the IM clerkship last in the academic year performed better on the required OSCE. More information is needed to determine why this is seen only at the end of the year.
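
For readers unfamiliar with this kind of comparison, the sketch below shows one way such data might be examined in Python: a between-class comparison of M3 IM clerkship OSCE scores and a summary of mean scores by when in the year the clerkship was taken. It is not the study's SPSS analysis, and the file and column names are hypothetical.

# Illustrative only: the abstract does not name the specific tests used in SPSS.
# This sketch compares OSCE scores between graduating classes and summarizes
# performance by clerkship timing. File and column names are hypothetical.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("osce_scores.csv")
# expected columns: class_year (2013 or 2014), clerkship_block (1 = first block, ...),
# osce_total_pct (M3 IM clerkship OSCE total percentage)

scores_2013 = df.loc[df["class_year"] == 2013, "osce_total_pct"]
scores_2014 = df.loc[df["class_year"] == 2014, "osce_total_pct"]
t_stat, p_value = ttest_ind(scores_2013, scores_2014)
print(f"Class of 2013 vs. 2014: t = {t_stat:.2f}, p = {p_value:.3f}")

print(df.groupby("clerkship_block")["osce_total_pct"].mean())  # mean score by timing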


MedEdPORTAL | 2018

Innovation to Dissemination Workshop: Selecting Outcome Measures to Translate Educational Innovations Into Scholarship

Michael S. Ryan; Patricia Quigley; Clifton Lee; Ian Chua; Caroline R. Paul; Joseph Gigante; Gary L. Beck Dallaghan

Introduction Curricular innovations are invaluable to the improvement of medical education programs, and thus, their dissemination to broader audiences is imperative. However, medical educators often struggle to translate innovative ideas into scholarly pursuits due to a lack of experience or expertise in selecting outcome measures that demonstrate impact. A recent national call for increased focus on outcome measures for medical education research highlights the need for more training in this area. Methods We developed a 2-hour interactive workshop to improve educator ability to identify outcome measures for educational innovations. This workshop was delivered at a national pediatrics educational conference and at three local institutional faculty development sessions. Results Participants were diverse in terms of experience, expertise, and roles within their educational programs. Participants rated the workshop positively in each setting and identified next steps in developing their own products of educational scholarship. Discussion This workshop can provide faculty and faculty developers with a template for developing a skill set in identifying outcome measures and pairing them with educational innovations.


Journal of Medical Education and Curricular Development | 2018

Feedback Quality Using an Observation Form

Gary L. Beck Dallaghan; Joy Higgins; Adam Reinhardt

Background: Direct observations with focused feedback are critical components of medical student education. Numerous challenges exist in providing useful comments to students during their clerkships. Students' evaluations of the clerkship indicated they were not receiving feedback from preceptors or house officers. Objective: To encourage direct observation with feedback, Structured Patient Care Observation (SPCO) forms were used to evaluate third-year medical students during patient encounters. Design: In 2014-2015, third-year medical students at a Midwestern medical school completed an 8-week pediatrics clerkship that provided experiences on inpatient wards and in ambulatory clinics. Students were expected to solicit feedback using the SPCO form. Results/Findings: A total of 121 third-year medical students completed the pediatrics clerkship. All of the students completed at least one SPCO form. Several students had more than one observation documented, resulting in 161 SPCOs submitted. Eight were excluded for missing data, leaving 153 observations for analysis. Encounter settings included hospital (70), well-child visits (34), sick visits (41), and not identified (8). Observers included attending physicians (88) and residents (65). The SPCOs generated 769 points of feedback; comments coalesced into themes of patient interviews, physical examination, and communication with patients and family. Once themes were identified, comments within each theme were further categorized as either actionable or reinforcing feedback. Discussion: SPCOs provided a structure for receiving formative feedback from clinical supervisors. Within each theme, reinforcing feedback and actionable comments specific enough to be useful in shaping future encounters were identified.


Academic Psychiatry | 2018

Measuring Burnout Among Psychiatry Clerkship Directors

Jeffrey J. Rakofsky; Gary L. Beck Dallaghan; Richard Balon

Objective: The primary purpose of this study was to determine the prevalence of burnout among psychiatry clerkship directors. Methods: Psychiatry clerkship directors were solicited via email to complete an electronic version of the Maslach Burnout Inventory-General Survey and the Respondent Information Form. Results: Fifty-four out of 110 surveys (49%) were completed. Fourteen percent of respondents scored in the “high exhaustion” category, 21.6% scored in the “low professional efficacy” category, 20.4% scored in the “high cynicism” category, and 15.1% of respondents met the threshold for at least two of the three categories. Those who scored in the “low professional efficacy” category reported higher levels of salary support for research, while those who scored in the “high cynicism” category reported lower levels of salary support at a trend level. Those who scored in the “high cynicism” category were younger. Conclusions: Approximately 14–22 percent of psychiatry clerkship directors reported some level of burnout depending on the subscale used. Future studies should aim to better identify those clerkship directors who are at greatest risk of becoming burned out by their educational role and to clarify the link between salary support for research, age, and burnout.


Journal of The Medical Library Association | 2017

Characteristics of multi-institutional health sciences education research: a systematic review

Jocelyn Schiller; Gary L. Beck Dallaghan; Terry Kind; Heather McLauchlan; Joseph Gigante; Sherilyn Smith

Objectives: Multi-institutional research increases the generalizability of research findings. However, little is known about the characteristics of collaborations across institutions in health sciences education research. Using a systematic review process, the authors describe characteristics of published, peer-reviewed multi-institutional health sciences education research to inform educators who are considering such projects. Methods: Two medical librarians searched the MEDLINE, Education Resources Information Center (ERIC), EMBASE, and CINAHL databases for English-language studies published between 2004 and 2013 using keyword terms related to multi-institutional systems and health sciences education. Teams of two authors reviewed each study and resolved coding discrepancies through consensus. Collected data points included funding, research network involvement, author characteristics, learner characteristics, and research methods. Data were analyzed using descriptive statistics. Results: One hundred eighteen of 310 articles met inclusion criteria. Sixty-three (53%) studies received external and/or internal financial support (87% listed external funding, 37% listed internal funding). Forty-five funded studies involved graduate medical education programs. Twenty (17%) studies involved a research or education network. Eighty-five (89%) publications listed an author with a master's or doctoral degree. Ninety-two (78%) studies were descriptive, whereas 26 (22%) were experimental. The reported study outcomes were changes in student attitude (38%; n=44), knowledge (26%; n=31), or skill assessment (23%; n=27), as well as patient outcomes (9%; n=11). Conclusions: Descriptive multi-institutional studies reporting knowledge or attitude outcomes predominate among published work. Our findings indicate that funding resources are not essential for successfully undertaking multi-institutional projects. Funded studies were more likely to originate from graduate medical or nursing programs.
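
As a small illustration of the descriptive analysis described above (simple counts and percentages of coded study characteristics), the sketch below tabulates hypothetical coding data in Python; the column names and categories are assumptions, not the review's actual coding sheet.

# Hedged sketch: tabulating coded characteristics of included studies, as a
# systematic review analyzed with descriptive statistics might.
# The data frame contents are hypothetical placeholders.
import pandas as pd

studies = pd.DataFrame({
    "design": ["descriptive", "experimental", "descriptive", "descriptive"],
    "funded": [True, False, True, False],
    "outcome": ["attitude", "knowledge", "skill", "attitude"],
})

for col in ["design", "funded", "outcome"]:
    pct = studies[col].value_counts(normalize=True) * 100
    print(pct.round(1))  # percentage of included studies in each category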

Collaboration


Dive into Gary L. Beck Dallaghan's collaborations.

Top Co-Authors

Adam Reinhardt (University of Nebraska Medical Center)
Catherine Bevil (University of Nebraska–Lincoln)
Elizabeth Lyden (University of Nebraska–Lincoln)
Gregory W. Briscoe (Eastern Virginia Medical School)
Joy Higgins (University of Nebraska Medical Center)
Ann S. Botash (State University of New York Upstate Medical University)
Beat D. Steiner (University of North Carolina at Chapel Hill)