
Publication


Featured research published by Lalena M. Yarris.


Academic Emergency Medicine | 2009

Attending and resident satisfaction with feedback in the emergency department

Lalena M. Yarris; Judith A. Linden; H. Gene Hern; Cedric Lefebvre; David M. Nestler; Rongwei Fu; Esther K. Choo; Joseph LaMantia; Patrick Brunett

OBJECTIVES Effective feedback is critical to medical education. Little is known about emergency medicine (EM) attending and resident physician perceptions of feedback. The focus of this study was to examine perceptions of the educational feedback that attending physicians give to residents in the clinical environment of the emergency department (ED). The authors compared attending and resident satisfaction with real-time feedback and hypothesized that the two groups would report different overall satisfaction with the feedback they currently give and receive in the ED. METHODS This observational study surveyed attending and resident physicians at 17 EM residency programs through web-based surveys. The primary outcome was overall satisfaction with feedback in the ED, ranked on a 10-point scale. Additional survey items addressed specific aspects of feedback. Responses were compared using a linear generalized estimating equation (GEE) model for overall satisfaction, a logistic GEE model for dichotomized responses, and an ordinal logistic GEE model for ordinal responses. RESULTS Three hundred seventy-three of 525 (71%) attending physicians and 356 of 596 (60%) residents completed the survey. Attending physicians were more satisfied with overall feedback (mean score 5.97 vs. 5.29, p < 0.001) and with timeliness of feedback (odds ratio [OR] = 1.56, 95% confidence interval [CI] = 1.23 to 2.00; p < 0.001) than residents. Attending physicians were also more likely to rate the quality of feedback as very good or excellent for positive feedback, constructive feedback, feedback on procedures, documentation, management of ED flow, and evidence-based decision-making. Attending physicians reported time constraints as the top obstacle to giving feedback and were more likely than residents to report that feedback is usually attending initiated (OR = 7.09, 95% CI = 3.53 to 14.31; p < 0.001). 
CONCLUSIONS Attending physician satisfaction with the quality, timeliness, and frequency of feedback given is higher than resident physician satisfaction with feedback received. Attending and resident physicians have differing perceptions of who initiates feedback and how long it takes to provide effective feedback. Knowledge of these differences in perceptions about feedback may be used to direct future educational efforts to improve feedback in the ED.


Academic Medicine | 2013

Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought.

Jonathan S. Ilgen; Judith L. Bowen; Lucas A. McIntyre; Kenny V. Banh; David Barnes; Wendy C. Coates; Jeffrey Druck; Megan L. Fix; Diane Rimple; Lalena M. Yarris; Kevin W. Eva

Purpose Although decades of research have yielded considerable insight into physicians’ clinical reasoning processes, assessing these processes remains challenging; thus, the authors sought to compare diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Method This 2011–2012 multicenter randomized study of 393 clinicians (medical students, postgraduate trainees, and faculty) measured diagnostic accuracy on clinical vignettes under two conditions: one encouraged participants to give their first impression (FI), and the other led participants through a directed search (DS) for the correct diagnosis. The authors compared accuracy, feasibility, reliability, and relation to United States Medical Licensing Exam (USMLE) scores under each condition. Results A 2 (instructional condition) × 2 (vignette complexity) × 3 (experience level) analysis of variance revealed no difference in accuracy as a function of instructional condition (F[1,379] = 2.44, P = .12), but demonstrated the expected main effects of vignette complexity (F[1,379] = 965.2, P < .001) and experience (F[2,379] = 39.6, P < .001). Pearson correlations revealed greater associations between assessment scores and USMLE performance in the FI condition than in the DS condition (P < .001). Spearman–Brown calculations consistently indicated that alpha ≥ 0.75 could be achieved more efficiently under the FI condition relative to the DS condition. Conclusions Instructions to trust one’s first impressions result in similar performance when compared with instructions to consider clinical information in a systematic fashion, but have greater utility when used for the purposes of assessment.


Pediatric Emergency Care | 2011

Pediatric Educational Needs Assessment for Urban and Rural Emergency Medical Technicians

Ross J. Fleischman; Lalena M. Yarris; Merlin Curry; Stephanie C. Yuen; Alia R. Breon; Garth Meckler

Objective The objective of the study was to identify past experiences, present needs, barriers, and desired methods of training for urban and rural emergency medical technicians. Methods This 62-question pilot-tested written survey was administered at the 2008 Oregon EMS and 2009 EMS for Children conferences. Respondents were compared with registration lists and the state emergency medical services (EMS) database to assess for nonresponder bias. Agencies more than 10 miles from a population of 40,000 were defined as rural. Results Two hundred nineteen (70%) of 313 EMS personnel returned the surveys. Respondents were 3% first responders, 27% emergency medical technician basics, 20% intermediates, and 47% paramedics. Sixty-eight percent were rural, and 32% were urban. Sixty-eight percent reported fewer than 10% pediatric transports. Overall, respondents rated their comfort caring for pediatric patients as 3.1 on a 5-point Likert scale (95% confidence interval, 3.1–3.2). Seventy-two percent reported a mean rating of less than “comfortable” (4 on the scale) across 17 topics in pediatric care, which did not differ by certification level. Seven percent reported no pediatric training in the last 2 years, and 76% desired more. The “quality of available trainings” was ranked as the most important barrier to training; 26% of rural versus 7% of urban EMS personnel ranked distance as the most significant barrier (P < 0.01). Fifty-one percent identified highly realistic simulations as the method that helped them learn best. In the past 2 years, 19% had trained on a highly realistic pediatric simulator. One to 3 hours was the preferred duration for trainings. Conclusions Except for distance as a barrier, there were no significant differences between urban and rural responses. Both urban and rural providers desire resources, in particular, highly realistic simulation, to address the infrequency of pediatric transports and limited training.


Annals of Emergency Medicine | 2016

Examining Reliability and Validity of an Online Score (ALiEM AIR) for Rating Free Open Access Medical Education Resources

Teresa Man Yee Chan; Andrew Grock; Michael Paddock; Kulamakan Kulasegaram; Lalena M. Yarris; Michelle Lin

STUDY OBJECTIVE Since 2014, Academic Life in Emergency Medicine (ALiEM) has used the Approved Instructional Resources (AIR) score to critically appraise online content. The primary goals of this study are to determine the interrater reliability (IRR) of the ALiEM AIR rating score and determine its correlation with expert educator gestalt. We also determine the minimum number of educator-raters needed to achieve acceptable reliability. METHODS Eight educators each rated 83 online educational posts with the ALiEM AIR scale. Items include accuracy, usage of evidence-based medicine, referencing, utility, and the Best Evidence in Emergency Medicine rating score. A generalizability study was conducted to determine IRR and rating variance contributions of facets such as rater, blogs, posts, and topic. A randomized selection of 40 blog posts previously rated through ALiEM AIR was then rated again by a blinded group of expert medical educators according to their gestalt. Their gestalt impression was subsequently correlated with the ALiEM AIR score. RESULTS The IRR for the ALiEM AIR rating scale was 0.81 during the 6-month pilot period. Decision studies showed that at least 9 raters were required to achieve this reliability. Spearman correlations between mean AIR score and the mean expert gestalt ratings were 0.40 for recommendation for learners and 0.35 for their colleagues. CONCLUSION The ALiEM AIR scale is a moderately to highly reliable, 5-question tool when used by medical educators for rating online resources. The score displays a fair correlation with expert educator gestalt in regard to the quality of the resources.


Academic Emergency Medicine | 2011

Education research: A primer for educators in emergency medicine

Lalena M. Yarris; Nicole M. DeIorio

As medical educators strive to adopt an evidence-based, outcomes-driven approach to teaching, education research in emergency medicine (EM) is burgeoning. Many educational challenges prompt specific research questions that are well suited to investigative study, but educators face numerous barriers to translating exciting ideas into research publications. This primer, intended for educators in EM, provides a brief overview of the current scope and essential elements of education research. We present an approach to identifying research problems and conceptual frameworks and defining specific research questions. A common approach to curricular development is reviewed, as well as a fundamental overview of qualitative and quantitative methods that can be applied to educational research questions. Finally, suggestions for disseminating results and overcoming common barriers to conducting research are discussed.


Academic Emergency Medicine | 2011

Effect of an Educational Intervention on Faculty and Resident Satisfaction with Real-time Feedback in the Emergency Department

Lalena M. Yarris; Rongwei Fu; Joseph LaMantia; Judith A. Linden; H. Gene Hern; Cedric Lefebvre; David M. Nestler; Janis P. Tupesis; Nicholas E. Kman

OBJECTIVES Effective real-time feedback is critical to medical education. This study tested the hypothesis that an educational intervention related to feedback would improve emergency medicine (EM) faculty and resident physician satisfaction with feedback. METHODS This was a cluster-randomized, controlled study of 15 EM residency programs in 2007-2008. An educational intervention was created that combined a feedback curriculum with a card system designed to promote timely, effective feedback. Sites were randomized either to receive the intervention or to continue their current feedback method. All participants completed a Web-based survey before and after the intervention period. The primary outcome was overall feedback satisfaction on a 10-point scale. Additional items addressed specific aspects of feedback. Responses were compared using a generalized estimating equations model, adjusting for confounders and baseline differences between groups. The study was designed to achieve at least 80% power to detect a one-point difference in overall satisfaction (α = 0.05). RESULTS Response rates for pre- and postintervention surveys were 65.9 and 47.3% (faculty) and 64.7 and 56.9% (residents). Residents in the intervention group reported a mean overall increase in feedback satisfaction scores compared to those in the control group (mean increase 0.96 points, standard error [SE] ± 0.44, p = 0.03) and significantly higher satisfaction with the quality, amount, and timeliness of feedback. There were no significant differences in mean scores for overall and specific aspects of satisfaction between the faculty physician intervention and control groups. CONCLUSIONS An intervention designed to improve real-time feedback in the ED resulted in higher resident satisfaction with feedback received, but did not affect faculty satisfaction with the feedback given.


Academic Emergency Medicine | 2012

A suggested core content for education scholarship fellowships in emergency medicine

Lalena M. Yarris; Wendy C. Coates; Michelle Lin; Karen Lind; Jaime Jordan; Samuel Clarke; Todd Guth; Sally A. Santen; Stanley J. Hamstra

A working group at the 2012 Academic Emergency Medicine consensus conference on education research in emergency medicine (EM) convened to develop a curriculum for dedicated postgraduate fellowships in EM education scholarship. This fellowship is intended to create future education scholars, equipped with the skills to thrive in academic careers. This proceedings article reports on the consensus of a breakout session subgroup tasked with defining a common core content for education scholarship fellowships. The authors propose that the core content of an EM education scholarship fellowship can be categorized in four distinct areas: career development, theories of learning and teaching methods, education research methods, and educational program administration. This core content can be incorporated into curricula for education scholarship fellowships in EM or other fields and can also be adapted for use in general medical education fellowships.


Journal of Graduate Medical Education | 2016

Approved Instructional Resources Series: A National Initiative to Identify Quality Emergency Medicine Blog and Podcast Content for Resident Education.

Michelle Lin; Nikita Joshi; Andrew Grock; Anand Swaminathan; Eric J. Morley; Jeremy B. Branzetti; Taku Taira; Felix Ankel; Lalena M. Yarris

Background Emergency medicine (EM) residency programs can provide up to 20% of their planned didactic experiences asynchronously through the Individualized Interactive Instruction (III) initiative. Although blogs and podcasts provide potential material for III content, programs often struggle with identifying quality online content. Objective To develop and implement a process to curate quality EM content on blogs and podcasts for resident education and III credit. Methods We developed the Approved Instructional Resources (AIR) Series on the Academic Life in Emergency Medicine website. Monthly, an editorial board identifies, peer reviews, and writes assessment questions for high-quality blog/podcast content. Eight educators rate each post using a standardized scoring instrument. Posts scoring ≥ 30 of 35 points are awarded an AIR badge and featured in the series. Enrolled residents can complete an assessment quiz for III credit. After 12 months of implementation, we report on program feasibility, enrollment rate, web analytics, and resident satisfaction scores. Results As of June 2015, 65 EM residency programs are enrolled in the AIR Series, and 2140 AIR quizzes have been completed. A total of 96% (2064 of 2140) of participants agree or strongly agree that the activity would improve their clinical competency, 98% (2098 of 2140) plan to use the AIR Series for III credit, and 97% (2077 of 2140) plan to use it again in the future. Conclusions The AIR Series is a national asynchronous EM curriculum featuring quality blogs and podcasts. It uses a national expert panel and novel scoring instrument to peer review web-based educational resources.


Journal of Graduate Medical Education | 2015

Feedback: Cultivating a Positive Culture.

Aaron Kraut; Lalena M. Yarris; Joan Sargeant

Feedback has long been recognized as the “cornerstone of effective clinical teaching.”1 Recently, we have seen emphasis shift from an instructor-centric paradigm to a learner-centric model that aims to understand how learners seek, receive, and incorporate feedback. These are crucial first steps in improving feedback effectiveness. Rather than continuing to focus on feedback delivery methods, recent publications highlight the importance of the learner’s perspective in the feedback conversation through nurturing the skill of “reflection-in-action” and promoting a culture of “informed self-assessment.”2,3 This paradigm shift represents a welcome change, as a focus on learner-dependent variables better aligns with what really matters in the feedback conversation—improving learner performance. To ultimately improve performance, we must better understand what causes the feedback magic to occur: Which conditions of the learning environment spark recipient engagement, reflection, and motivation to change behavior?4


Academic Emergency Medicine | 2012

Creating Educational Leaders: Experiences with Two Education Fellowships in Emergency Medicine

Lalena M. Yarris; Wendy C. Coates

Academic physicians aiming to build careers on the scholarship of teaching require specific career development opportunities designed to provide the skills necessary for successful advancement and promotion as clinician-educators and scholars. Completing this training prior to embarking on an academic career may facilitate a smooth transition to a faculty position and establish mentoring networks and research collaboratives. This article describes two pilot medical education fellowships that have been successfully implemented in separate and unique departments of emergency medicine (EM). By comparing and contrasting the curricula and incorporating the experiences of graduating 10 EM education fellows over the past decade, the authors propose a fellowship structure that may be adapted to meet the needs of medical educators in a broad variety of fields and disciplines.

Collaboration


Dive into Lalena M. Yarris's collaboration.

Top Co-Authors

Jaime Jordan, University of California

Samuel Clarke, University of California

Michelle Lin, University of California

Deborah Simpson, Medical College of Wisconsin