Nicole M. Dubosh
Beth Israel Deaconess Medical Center
Publications
Featured research published by Nicole M. Dubosh.
Stroke | 2016
Nicole M. Dubosh; M. Fernanda Bellolio; Alejandro A. Rabinstein; Jonathan A. Edlow
Background and Purpose— Emerging evidence demonstrating the high sensitivity of early brain computed tomography (CT) brings into question the necessity of always performing lumbar puncture after a negative CT in the diagnosis of spontaneous subarachnoid hemorrhage (SAH). Our objective was to determine the sensitivity of brain CT using modern scanners (16-slice technology or greater) when performed within 6 hours of headache onset to exclude SAH in neurologically intact patients. Methods— After conducting a comprehensive literature search using Ovid MEDLINE, Ovid EMBASE, Web of Science, and Scopus, we conducted a meta-analysis. We included original research studies of adults presenting with a history concerning for spontaneous SAH who had a noncontrast brain CT scan using a modern-generation multidetector CT scanner within 6 hours of symptom onset. Our study adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results— A total of 882 titles were reviewed and 5 articles met inclusion criteria, including an estimated 8907 patients. Thirteen had a missed SAH (incidence 1.46 per 1000) on brain CTs within 6 hours. Overall sensitivity of the CT was 0.987 (95% confidence interval, 0.971–0.994) and specificity was 0.999 (95% confidence interval, 0.993–1.0). The pooled likelihood ratio of a negative CT was 0.010 (95% confidence interval, 0.003–0.034). Conclusions— In patients presenting with thunderclap headache and a normal neurological examination, a normal brain CT within 6 hours of headache onset is extremely sensitive in ruling out aneurysmal SAH.
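As a rough illustration of the test characteristics reported above, the sketch below computes sensitivity, specificity, and the negative likelihood ratio from a single hypothetical 2x2 table; the counts are invented, and the pooled estimates in the abstract come from weighting across the five included studies rather than from any one table.

```python
# Illustrative only: hypothetical 2x2 counts, not the pooled data from this meta-analysis.
def test_characteristics(tp, fn, fp, tn):
    """Return sensitivity, specificity, and negative likelihood ratio for a binary test."""
    sensitivity = tp / (tp + fn)                   # positive CT among patients with SAH
    specificity = tn / (tn + fp)                   # negative CT among patients without SAH
    lr_negative = (1 - sensitivity) / specificity  # LR- = P(CT- | SAH) / P(CT- | no SAH)
    return sensitivity, specificity, lr_negative

# Hypothetical single-study counts: 950 SAH cases with 13 missed, 7900 non-SAH cases with 5 false positives.
sens, spec, lr_neg = test_characteristics(tp=937, fn=13, fp=5, tn=7895)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, LR-={lr_neg:.3f}")
```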
Journal of Emergency Medicine | 2014
Nicole M. Dubosh; Dylan Carney; Jonathan Fisher; Carrie Tibbles
BACKGROUND Transitions of care are ubiquitous in the emergency department (ED) and inevitably introduce the opportunity for errors. Few emergency medicine residency programs provide formal training or a standard process for patient handoffs. Checklists have been shown to be effective quality-improvement measures in inpatient settings and may be a feasible method to improve ED handoffs. OBJECTIVE To determine if the use of a sign-out checklist improves the accuracy and efficiency of resident sign-out in the ED. METHODS A prospective pre-/postinterventional study of residents rotating in the ED at a tertiary academic medical center. Trained research assistants observed resident sign-out during shift change over a 2-week period and completed a data collection tool to indicate whether or not key components of sign-out occurred and time to sign out each patient. An electronic sign-out checklist was implemented using a multi-faceted educational effort. A 2-week postintervention observation phase was conducted. Proportions, means, and nonparametric comparison tests were calculated using STATA. RESULTS One hundred fifteen sign-outs were observed prior to checklist implementation and 114 were observed after. Significant improvements were seen in four sign-out components: reporting of history of present illness increased from 81% to 99%, ED course increased from 75% to 86%, likely diagnosis increased from 60% to 77%, and team awareness of plan increased from 21% to 41%. Use of the repeat-back technique decreased from 13% to 5% after checklist implementation and time to sign-out showed no significant change. CONCLUSION Implementation of a checklist improved the transfer of information without increasing time to sign-out.
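As a sketch of the kind of pre/post comparison summarized above, the snippet below runs a two-proportion z-test on the history-of-present-illness component; note that the authors report using nonparametric tests in Stata, and the counts here are back-calculated from the published percentages, so this is only an approximate re-illustration.

```python
# Illustrative only (not the authors' analysis): two-proportion z-test for the
# "history of present illness" sign-out component, pre vs. post checklist.
from statsmodels.stats.proportion import proportions_ztest

successes = [93, 113]        # approx. counts from 81% of 115 and 99% of 114 sign-outs
observations = [115, 114]

z_stat, p_value = proportions_ztest(count=successes, nobs=observations)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```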
Journal of Emergency Medicine | 2014
Timothy C. Peck; Nicole M. Dubosh; Carlo L. Rosen; Carrie Tibbles; Jennifer V. Pope; Jonathan Fisher
BACKGROUND The Accreditation Council for Graduate Medical Education's Next Accreditation System endorsed specialty-specific milestones as the foundation of an outcomes-based resident evaluation process. These milestones represent five competency levels (entry level to expert), and graduating residents will be expected to meet Level 4 on all 23 milestones. Limited validation data on these milestones exist. It is unclear if higher levels represent true competencies of practicing emergency medicine (EM) attendings. OBJECTIVE Our aim was to examine how practicing EM attendings in academic and community settings self-evaluate on the new EM milestones. METHODS An electronic self-evaluation survey outlining 9 of the 23 EM milestones was sent to a sample of practicing EM attendings in academic and community settings. Attendings were asked to identify which level was appropriate for them. RESULTS Seventy-nine attendings were surveyed, with an 89% response rate. Sixty-one percent were academic. Twenty-three percent (95% confidence interval [CI] 20%-27%) of all responses were Levels 1, 2, or 3; 38% (95% CI 34%-42%) were Level 4; and 39% (95% CI 35%-43%) were Level 5. Seventy-seven percent of attendings found themselves to be Level 4 or 5 in eight of nine milestones. Only 47% found themselves to be Level 4 or 5 in ultrasound skills (p = 0.0001). CONCLUSIONS Although a majority of EM attendings reported meeting Level 4 milestones, many felt they did not meet Level 4 criteria. Attendings report less perceived competence in ultrasound skills than other milestones. It is unclear if self-assessments reflect the true competency of practicing attendings. The study design can be useful to define the accuracy, precision, and validity of milestones for any medical field.
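The abstract quotes 95% confidence intervals for the response proportions; a minimal sketch of that calculation is below, assuming a standard normal-approximation interval and hypothetical counts, since the abstract reports only pooled percentages.

```python
# Illustrative only: 95% confidence interval for a survey response proportion.
from statsmodels.stats.proportion import proportion_confint

# Hypothetical: 27 of 70 respondents rate themselves Level 4 on a given milestone.
count, nobs = 27, 70
low, high = proportion_confint(count, nobs, alpha=0.05, method="normal")
print(f"proportion = {count / nobs:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```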
Diagnosis | 2015
Nicole M. Dubosh; Jonathan A. Edlow; Micah Lefton; Jennifer V. Pope
Abstract Background: Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. Methods: This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. Results: During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. Conclusions: All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
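The methods mention a reliability coefficient for the three independent case reviewers; the abstract does not say which statistic was used, but a common choice for three or more raters assigning categorical labels is Fleiss' kappa, sketched here with invented ratings.

```python
# Illustrative only: Fleiss' kappa for three reviewers classifying each case as
# knowledge gap (0), cognitive error (1), or systems-based error (2). Ratings are invented.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([   # rows are cases, columns are the three reviewers
    [0, 0, 0],
    [1, 1, 2],
    [2, 2, 2],
    [0, 1, 0],
    [1, 1, 1],
])

table, _ = aggregate_raters(ratings)   # per-case counts of each category
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```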
Academic Emergency Medicine | 2011
Aaron W. Bernard; Nicole M. Dubosh; Michael O’Connell; Justin Adkins; Sorabh Khandelwal; Brian Hiestand
OBJECTIVES Increasing the size of medical school classes has resulted in the use of community hospitals for emergency medicine (EM) clerkships. While differences in clinical experience are expected, it is unclear if they are significant. The authors set out to investigate whether or not clinical site affects student performance on a standard written exam as a measure of medical knowledge. METHODS This was a retrospective analysis of data from 2005 to 2009 for a mandatory fourth-year EM clerkship at one institution that uses academic (EM residency), hybrid (residency training site but not EM), and community (no residency programs) hospitals as clerkship sites. Multiple variable linear regression was used to examine the relationship between clerkship site and end of clerkship written exam score. Additional covariates included were the time of year the rotation was completed (by 3- or 4-month tertiles) and whether the student matched in EM. As test scores increased over the study period, a time factor was also included to account for this trend. A p-value of <0.05 was required for variable retention in the model. RESULTS A total of 718 students completed the clerkship and had complete data for analysis. Thirty-five students matched in EM. A total of 311 rotated at academic sites, 304 at hybrid sites, and 103 at community sites. After adjusting for covariates, clinical site was not a significant predictor of exam score (F(2,691) = 0.42, p = 0.65). Factors associated with higher test score were student match in EM (beta coefficient = 3.4, 95% confidence interval [CI] = 1.0 to 5.7) and rotation in July through September (beta coefficient = 1.8, 95% CI = 0.5 to 3.0, against a reference of January through April). No significant interaction terms or confounders were identified. CONCLUSIONS This study found no evidence that clerkship site affected final exam score. Academic EM clerkships may consider partnering with other hospitals for clinical experiences without compromising education.
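A minimal sketch of the multiple variable linear regression described in the methods is below, using statsmodels with hypothetical column and file names; the study's actual variable coding is not available from the abstract.

```python
# Illustrative only: regress end-of-clerkship exam score on clerkship site,
# EM match status, rotation tertile, and year. Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("clerkship_scores.csv")   # one row per student (hypothetical file)

model = smf.ols("exam_score ~ C(site) + matched_em + C(rotation_tertile) + year", data=df).fit()
print(model.summary())         # coefficients with 95% confidence intervals
print(anova_lm(model, typ=2))  # partial F-tests, e.g. the F statistic for the site term
```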
Western Journal of Emergency Medicine | 2018
Julianna Jung; Douglas Franzen; Luan Lawson; David E. Manthey; Matthew Tews; Nicole M. Dubosh; Jonathan Fisher; Marianne Haughey; Joseph B. House; Arleigh Trainor; David A. Wald; Katherine M. Hiller
Introduction Clinical assessment of medical students in emergency medicine (EM) clerkships is a highly variable process that presents unique challenges and opportunities. Currently, clerkship directors use institution-specific tools with unproven validity and reliability that may or may not address competencies valued most highly in the EM setting. Standardization of assessment practices and development of a common, valid, specialty-specific tool would benefit EM educators and students. Methods A two-day national consensus conference was held in March 2016 in the Clerkship Directors in Emergency Medicine (CDEM) track at the Council of Residency Directors in Emergency Medicine (CORD) Academic Assembly in Nashville, TN. The goal of this conference was to standardize assessment practices and to create a national clinical assessment tool for use in EM clerkships across the country. Conference leaders synthesized the literature, articulated major themes and questions pertinent to clinical assessment of students in EM, clarified the issues, and outlined the consensus-building process prior to consensus-building activities. Results The first day of the conference was dedicated to developing consensus on these key themes in clinical assessment. The second day of the conference was dedicated to discussing and voting on proposed domains to be included in the national clinical assessment tool. A modified Delphi process was initiated after the conference to reconcile questions and items that did not reach an a priori level of consensus. Conclusion The final tool, the National Clinical Assessment Tool for Medical Students in Emergency Medicine (NCAT-EM), is presented here.
Clinical Practice and Cases in Emergency Medicine | 2017
Andrey Moyko; Nissa J. Ali; Nicole M. Dubosh; Matthew L. Wong
Epiglottitis is an uncommon but life-threatening disease. While the most common infectious causes are the typical respiratory pathogens, Pasteurella multocida is a rare causative organism. We present a case of P. multocida epiglottitis diagnosed by blood culture. The patient required intubation but was successfully treated medically. P. multocida is a rare cause of epiglottitis; this is the ninth reported case in the literature. Most diagnoses are made from blood culture and patients usually have an exposure to animals.
Western Journal of Emergency Medicine | 2017
Katherine M. Hiller; Douglas Franzen; Luan Lawson; David E. Manthey; Jonathan Fisher; Marianne Haughey; Matthew Tews; Nicole M. Dubosh; Joseph B. House; Arleigh Trainor; David A. Wald; Julianna Jung
No abstract available; this publication is a brief educational advance in the CORD/CDEM supplement.
Western Journal of Emergency Medicine | 2017
Jason Lewis; Nicole M. Dubosh; Carlo L. Rosen; David Schoenfeld; Jonathan Fisher; Edward Ullman
Introduction The structure of the interview day affects applicant interactions with faculty and residents, which can influence the applicant’s rank list decision. We aimed to determine if there was a difference in matched residents between those interviewing on a day on which didactics were held and had increased resident and faculty presence (didactic day) versus an interview day with less availability for applicant interactions with residents and faculty (non-didactic day). Methods This was a retrospective study reviewing interview dates of matched residents from 2009–2015. Results Forty-two (61.8%) matched residents interviewed on a didactic day with increased faculty and resident presence versus 26 (38.2%) on a non-didactic interview day with less availability for applicant interactions (p = 0.04). Conclusion There is an association between interviewing on a didactic day with increased faculty and resident presence and matching in our program.
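One simple way to ask whether matched residents split evenly across the two interview-day types is a binomial test, sketched below; the appropriate null proportion is really the share of interview slots offered on didactic days, which the abstract does not report, so the 0.5 used here is only a placeholder and this is not the authors' analysis.

```python
# Illustrative only: binomial test for whether matched residents are evenly split between
# didactic and non-didactic interview days. The 0.5 null is a placeholder; the study's
# actual analysis and null proportion are not reported in the abstract.
from scipy.stats import binomtest

result = binomtest(k=42, n=42 + 26, p=0.5, alternative="two-sided")
print(f"observed proportion = {42 / 68:.3f}, p = {result.pvalue:.3f}")
```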
Journal of Emergency Medicine | 2017
Oren J. Mechanic; Nicole M. Dubosh; Carlo L. Rosen; Alden Landry
BACKGROUND The Emergency Department is widely regarded as the epicenter of medical care for diverse and largely disparate types of patients. Physicians must be aware of the cultural diversity of their patient population to appropriately address their medical needs. A better understanding of residency preparedness in cultural competency can lead to better training opportunities and patient care. OBJECTIVE The objective of this study was to assess residency and faculty exposure to formal cultural competency programs and assess future needs for diversity education. METHODS A short survey was sent to all 168 Accreditation Council for Graduate Medical Education program directors through the Council of Emergency Medicine Residency Directors listserv. The survey included drop-down options in addition to open-ended input. Descriptive and bivariate analyses were used to analyze data. RESULTS The response rate was 43.5% (73/168). Of the 68.5% (50/73) of residency programs that include cultural competency education, 90% (45/50) utilized structured didactics. Of these programs, 86.0% (43/50) included race and ethnicity education, whereas only 40.0% (20/50) included education on patients with limited English proficiency. Resident comfort with cultural competency was unmeasured by most programs (83.6%: 61/73). Of all respondents, 93.2% (68/73) were interested in a universal open-source cultural competency curriculum. CONCLUSIONS The majority of the programs in our sample have formal resident didactics on cultural competency. Some faculty members also receive cultural competency training. There are gaps, however, in types of cultural competency training, and many programs have expressed interest in a universal open-source tool to improve cultural competency for Emergency Medicine residents.
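The methods mention descriptive and bivariate analyses of the survey responses; one such bivariate comparison could be a chi-square test of independence on a cross-tabulation, sketched below with invented counts, since the abstract does not report the underlying tables.

```python
# Illustrative only: chi-square test of independence on a hypothetical 2x2 table,
# e.g. program setting by whether cultural competency didactics are offered.
from scipy.stats import chi2_contingency

table = [[30, 10],   # setting A programs: didactics yes / no (invented counts)
         [20, 13]]   # setting B programs: didactics yes / no (invented counts)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```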