Publications


Featured research published by Moshe Feldman.


Journal of Continuing Education in the Health Professions | 2012

Judicious use of simulation technology in continuing medical education

Michael T. Curtis; Deborah DiazGranados; Moshe Feldman

Use of simulation-based training is fast becoming a vital source of experiential learning in medical education. Although simulation is a common tool for undergraduate and graduate medical education curricula, the utilization of simulation in continuing medical education (CME) is still an area of growth. As more CME programs turn to simulation to address their training needs, it is important to highlight concepts of simulation technology that can help to optimize learning outcomes. This article discusses the role of fidelity in medical simulation. It provides support from a cross section of simulation training domains for determining the appropriate levels of fidelity, and it offers guidelines for creating an optimal balance of skill practice and realism for efficient training outcomes. After defining fidelity, 3 dimensions of fidelity, drawn from the human factors literature, are discussed in terms of their relevance to medical simulation. From this, research-based guidelines are provided to inform CME providers regarding the use of simulation in CME training.


Global Journal of Health Science | 2014

Randomized Controlled Trials: A Systematic Review of Laparoscopic Surgery and Simulation-Based Training

Allison A. Vanderbilt; Amelia Grover; Nicholas J. Pastis; Moshe Feldman; Deborah Diaz Granados; Lydia Karuta Murithi; Arch G. Mainous

Introduction: This systematic review was conducted to describe simulation-based training and analyze its impact on the acquisition of laparoscopic surgery skills during medical school and residency programs. Methods: This systematic review focused on published randomized controlled trials that examined the effectiveness of simulation-based training for developing laparoscopic surgery skills. Studies were identified by searching PubMed from the inception of the database to May 1, 2014 and by hand searches of specific journals. This review addresses the question of whether laparoscopic simulation translates the acquisition of surgical skills to the operating room (OR). Results: This systematic review of simulation-based training and laparoscopic surgery found that specific skills could be translatable to the OR. Twenty-one studies reported learning outcomes measured in five behavioral categories: economy of movement (8 studies), suturing (3 studies), performance time (13 studies), error rates (7 studies), and global rating (7 studies). Conclusion: Simulation-based training can lead to demonstrable benefits for surgical skills in the OR environment. This review suggests that simulation-based training is an effective way to teach laparoscopic surgery skills, increase translation of laparoscopic surgery skills to the OR, and increase patient safety; however, more research should be conducted to determine if and how simulation can become a part of the surgical curriculum.


Medical Education Online | 2013

Assessment in undergraduate medical education: a review of course exams

Allison A. Vanderbilt; Moshe Feldman; Issac K. Wood

Introduction: The purpose of this study is to describe an approach for evaluating assessments used in the first 2 years of medical school and to report the results of applying this method to current first- and second-year medical student examinations. Methods: Three faculty members coded all exam questions administered during the first 2 years of medical school. The reviewers discussed and compared the coded exam questions; during bi-monthly meetings, all differences in coding were resolved with consensus as the final criterion. We applied Moore's framework to assist the review process and to align it with National Board of Medical Examiners (NBME) standards. Results: The first- and second-year medical school examinations contained no (0%) competence-level questions. The majority of test questions, more than 50%, were at the NBME recall level. Conclusion: It is essential that multiple-choice questions (MCQs) test attitudes, skills, knowledge, and competency in medical school. Based on our findings, it is evident that our exams need to be improved to better prepare our medical students for successful completion of NBME step exams.
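
The consensus-coding approach above reduces to tallying question codes per assessment level and reporting the distribution. The following is a minimal sketch of that tally in Python, using hypothetical codes and counts (the abstract reports only "0% competence" and "more than 50% recall"):

```python
# Minimal sketch with hypothetical codes: tally consensus-coded exam questions
# by assessment level and report the percentage at each level.
from collections import Counter

# Hypothetical consensus codes for a question pool; the study's actual counts
# are not given in the abstract.
coded_questions = ["recall"] * 55 + ["application"] * 45

counts = Counter(coded_questions)
total = len(coded_questions)
for level in ("recall", "application", "competence"):
    n = counts.get(level, 0)
    print(f"{level}: {n}/{total} ({100 * n / total:.0f}%)")
```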


Medical Education | 2011

Theory is needed to improve education, assessment and policy in self-directed learning.

Paul E. Mazmanian; Moshe Feldman

Recent studies suggest that a theoretical model unique to self-directed learning (SDL) would help in making sense of confusing or overlapping concepts often used to guide teaching, learning and policy in the health professions. For example, Hojat et al., originators of the Jefferson Scale of Physician Lifelong Learning (JeffSPLL), an instrument validated in use with practising doctors and with undergraduate medical students, indicate that lifelong learning and SDL share key concepts, including self-initiated learning behaviours, information-seeking skills and the ability to recognise one's own learning needs. Regulatory bodies in Canada and the USA already require evidence of lifelong learning and self-assessment for ongoing certification of doctor performance, and European agencies are working to resolve variations in systems of appraisal, assessment and continuing professional development to inform good medical practice and to enable the free circulation of doctors among countries of the European Union.


Infection Control and Hospital Epidemiology | 2017

Acceptability and Necessity of Training for Optimal Personal Protective Equipment Use.

Michelle Doll; Moshe Feldman; Sarah Hartigan; Kakotan Sanogo; Michael P. Stevens; Myriah McReynolds; Nadia Masroor; Kaila Cooper; Gonzalo Bearman

Healthcare workers routinely self-contaminate even when using personal protective equipment. Observations of donning/doffing practices on inpatient units, along with surveys, were used to assess the need for a personal protective equipment training program. In contrast to low perceived risk, observed doffing behaviors demonstrated significant personal protective equipment technique deficits.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2015

An Internal Medicine Simulated Practical Examination for Assessment of Clinical Competency in Third-Year Medical Students.

Cheryl Bodamer; Moshe Feldman; Jeffrey Kushinka; Ellen Brock; Alan W. Dow; Jessica A. Evans; Gonzalo Bearman

Introduction: Achieving standardized assessment of medical student competency in patient care is a challenge. Simulation may provide unique contributions to overall assessment. We developed an Internal Medicine Standardized Simulation-Based Examination (SSBE) for the third-year clerkship to assess students' medical knowledge, diagnostic skills, and clinical management skills. We assessed convergent and test criterion validity by comparing the relationship of SSBE scores with United States Medical Licensing Examination Step 2 Clinical Knowledge, shelf examination, eQuiz, objective structured clinical examination, and ward evaluation scores, and overall clerkship grades. We hypothesize that the use of the SSBE will allow for a more reliable assessment of these competencies and add value to existing assessments. Methods: A prospective study design was used. The SSBE consisted of a computer-based photo quiz and cases on high-fidelity simulators. Performance on the SSBE was compared with standardized examinations, clinical evaluations, and overall clerkship grades. Students completed an evaluation of the experience. Results: Two hundred seven students completed the SSBE, with a mean (SD) score of 76.69 (7.78). SSBE performance was positively related to other assessments of medical knowledge: eQuiz scores (r(203) = 0.33, P < 0.01), shelf examination scores (r(158) = 0.53, P < 0.01), and clinical performance (ward scores; r(163) = 0.31, P < 0.01), but not to objective structured clinical examination scores. There was a positive relationship to final class grades (r(163) = 0.45, P < 0.01), shelf examination (r(158) = 0.52, P < 0.01), and Step 2 Clinical Knowledge scores (r(76) = 0.54, P < 0.01). Most students (93%) agreed that it was a fair examination. Conclusions: Our results provide validity evidence for the SSBE as an additional assessment tool that uses a novel approach for evaluating competency in patient care at the clerkship level.
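
Each r(df) above is a Pearson correlation reported with its degrees of freedom (df = n - 2, so r(203) reflects 205 paired scores). A minimal sketch of how such a coefficient and its P value could be computed, using simulated stand-in data rather than the study's actual scores:

```python
# Minimal sketch with simulated data: a Pearson correlation reported in the
# r(df) form used above, where df = n - 2 for n paired observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 205                                    # hypothetical sample size
ssbe = rng.normal(76.69, 7.78, n)          # SSBE scores (mean/SD from abstract)
shelf = 0.5 * ssbe + rng.normal(0, 6, n)   # stand-in correlated shelf scores

r, p = stats.pearsonr(ssbe, shelf)
print(f"r({n - 2}) = {r:.2f}, P = {p:.2g}")
```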


Academic Medicine | 2016

Training and Assessing Interprofessional Virtual Teams Using a Web-Based Case System.

Alan W. Dow; Peter A. Boling; Kelly S. Lockeman; Paul E. Mazmanian; Moshe Feldman; Deborah DiazGranados; Joel Browning; Antoinette B. Coe; Rachel Selby-Penczak; Sarah Hobgood; Linda J. Abbey; Pamela Parsons; Jeffrey C. Delafuente; Suzanne Fleming Taylor

Purpose: Today, clinical care is often provided by interprofessional virtual teams: groups of practitioners who work asynchronously and use technology to communicate. Members of such teams must be competent in interprofessional practice and the use of information technology, two targets for health professions education reform. The authors created a Web-based case system to teach and assess these competencies in health professions students. Method: They created a four-module, six-week geriatric learning experience using a Web-based case system. Health professions students were divided into interprofessional virtual teams. Team members received profession-specific information, entered a summary of this information into the case system's electronic health record, answered knowledge questions about the case individually, then collaborated asynchronously to answer the same questions as a team. Individual and team knowledge scores and case activity measures (number of logins, message board posts/replies, and views of message board posts) were tracked. Results: During academic year 2012–2013, 80 teams composed of 522 students from medicine, nursing, pharmacy, and social work participated. Knowledge scores varied by profession and within professions. Team scores were higher than individual scores (P < .001). Students and teams with higher knowledge scores had higher case activity measures. Team score was most highly correlated with the number of message board posts/replies and was not correlated with the number of views of message board posts. Conclusions: This Web-based case system provided a novel approach to teach and assess the competencies needed for virtual teams. This approach may be a valuable new tool for measuring competency in interprofessional practice.
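
The team-versus-individual result above (team scores higher, P < .001) comes from a paired design: each set of individual answers has a matching team answer on the same questions. The abstract does not name the statistical test used; a paired t-test is one plausible choice, sketched here with hypothetical scores:

```python
# Minimal sketch with hypothetical data: paired comparison of individual vs.
# team knowledge scores. The actual test used by the study is not specified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 80                                     # hypothetical number of teams
individual = rng.normal(70, 10, n)         # mean individual score per team
team = individual + rng.normal(5, 4, n)    # team score, assumed higher

t, p = stats.ttest_rel(team, individual)   # paired t-test
print(f"t({n - 1}) = {t:.2f}, P = {p:.2g}")
```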


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Demonstration: Advancing Robotics Research Through the Use of a Scale MOUT Facility

A. William Evans; Raegan M. Hoeft; Sherri A. Rehfeld; Moshe Feldman; Michael T. Curtis; Thomas Fincannon; Jessica Ottlinger; Florian Jentsch

This demonstration serves as an introduction to the CARAT scale MOUT (Military Operations in Urban Terrain) facility developed at the Team Performance Laboratory (TPL) at the University of Central Florida (UCF). Advances in automated military vehicles require research to understand how best to allocate control of these vehicles. Whether discussing uninhabited ground vehicles (UGVs) or air vehicles (UAVs), many questions still exist as to the optimum level of performance with respect to the ratio of human controllers to vehicles. The scale MOUT facility at UCF allows researchers to investigate these issues without sacrificing large, costly equipment and without requiring vast physical areas within which to test such equipment. This demonstration provides an introduction to the scale MOUT facility, describes the basic need for this tool, and presents its advantages over full-size counterparts as well as several other possible uses for the facility.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Simulation-Based Training across the Medical Education Continuum

Megan E. Gregory; Lauren E. Benishek; Elizabeth H. Lazzara; Moshe Feldman; Michael A. Rosen; Shawna J. Perry

Simulation-based training (SBT) is commonly integrated into medical education. This panel examines current uses of SBT in undergraduate, graduate, and continuing medical education and addresses the advantages and different considerations for each. Furthermore, we consider how SBT within the different levels of medical education can inform practices across levels.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Simulation in Healthcare: One Size Fits All?

Elizabeth H. Lazzara; Sallie J. Weaver; Matthew B. Weinger; Moshe Feldman; Michael A. Rosen; Kyle Harrison; F. Jacob Seagull

Simulation has been rapidly adopted within the medical community, as evidenced by the fact that clinical care providers from all backgrounds (e.g., residents, physicians, nurses, anesthesiologists, and ancillary staff) and all institutions (e.g., hospitals, training centers, and medical schools) have incorporated simulation into their training and education curricula. Although simulators are becoming a staple in clinical education, simulation is not a one-size-fits-all solution. Thus, the objective of the current panel is to combine the expertise of leading human factors and clinical care providers in the fields of learning, simulation, human performance, and human-system interaction to provide their insight and perspective on the following questions: What are the issues to consider when developing, implementing, and evaluating simulation-based training across a broad spectrum of training, education, and improvement applications in the healthcare domain? How can human factors scientists and healthcare experts combine their contributions to effectively develop, execute, and assess simulation-based training in hospitals, training centers, and medical schools?

Collaboration


Dive into Moshe Feldman's collaborations.

Top Co-Authors

Michael T. Curtis, University of Central Florida
Alan W. Dow, Virginia Commonwealth University
Allison A. Vanderbilt, Virginia Commonwealth University
Deborah DiazGranados, Virginia Commonwealth University
Gonzalo Bearman, Virginia Commonwealth University
Kelly S. Lockeman, Virginia Commonwealth University
Paul E. Mazmanian, Virginia Commonwealth University
Raegan M. Hoeft, University of Central Florida