Edward S. Wong
VCU Medical Center
Publications
Featured research published by Edward S. Wong.
Infection Control and Hospital Epidemiology | 2007
Glenn Ridenour; Russell Lampen; Jeff Federspiel; Steve Kritchevsky; Edward S. Wong; Michael W. Climo
OBJECTIVE To determine whether the use of chlorhexidine bathing and intranasal mupirocin therapy among patients colonized with methicillin-resistant Staphylococcus aureus (MRSA) would decrease the incidence of MRSA colonization and infection among intensive care unit (ICU) patients. METHODS After a 9-month baseline period (January 13, 2003, through October 12, 2003) during which all incident cases of MRSA colonization or infection were identified through the use of active-surveillance cultures in a combined medical-coronary ICU, all patients colonized with MRSA were treated with intranasal mupirocin and underwent daily chlorhexidine bathing. RESULTS After the intervention, incident cases of MRSA colonization or infection decreased 52% (incidence density, 8.45 vs 4.05 cases per 1,000 patient-days; P=.048). All MRSA isolates remained susceptible to chlorhexidine; the overall rate of mupirocin resistance was low (4.4%) among isolates identified by surveillance cultures and did not increase during the intervention period. CONCLUSIONS We conclude that the selective use of intranasal mupirocin and daily chlorhexidine bathing for patients colonized with MRSA reduced the incidence of MRSA colonization and infection and contributed to reductions identified by active-surveillance cultures. This finding suggests that additional strategies to reduce the incidence of MRSA infection and colonization--beyond expanded surveillance--may be needed.
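The 52% figure above follows directly from the two reported incidence densities. A minimal sketch of that arithmetic (the function name and example inputs are illustrative, not from the study):

```python
# Reproduce the incidence-density arithmetic reported in the abstract:
# 8.45 vs 4.05 cases per 1,000 patient-days, a ~52% relative reduction.

def incidence_density(cases: int, patient_days: int) -> float:
    """Incident cases per 1,000 patient-days."""
    return cases / patient_days * 1000

baseline_rate = 8.45       # per 1,000 patient-days (reported)
intervention_rate = 4.05   # per 1,000 patient-days (reported)

reduction = 1 - intervention_rate / baseline_rate
print(f"Relative reduction: {reduction:.0%}")  # -> Relative reduction: 52%
```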
American Journal of Infection Control | 1995
Robert Orenstein; L. Reynolds; Mary Karabaic; Archer Lamb; Sheldon M. Markowitz; Edward S. Wong
OBJECTIVES To determine the effectiveness and direct cost of two protective devices--a shielded 3 ml safety syringe (Safety-Lok; Becton Dickinson and Co., Becton Dickinson Division, Franklin Lakes, N.J.) and the components of a needleless IV system (InterLink; Baxter Healthcare Corp., Deerfield, Ill.)--in preventing needlestick injuries to health care workers. DESIGN Twelve-month prospective, controlled, before-and-after trial with a standardized questionnaire to monitor needlestick injury rates. SETTING Six hospital inpatient units, consisting of three medical units, two surgical units (all of which were similar in patient census, acuity, and frequency of needlesticks), and a surgical-trauma intensive care unit, at a 900-bed urban university medical center. PARTICIPANTS All nursing personnel, including registered nurses, licensed practical nurses, nursing aides, and students, as well as medical teams consisting of an attending physician, resident physician, interns, and medical students on the study units. INTERVENTION After a 6-month prospective surveillance period, the protective devices were randomly introduced to four of the chosen study units and to the surgical-trauma intensive care unit. RESULTS Forty-seven needlesticks were reported throughout the entire study period, 33 in the 6 months before and 14 in the 6 months after the introduction of the protective devices. Nursing staff members who were using hollow-bore needles and manipulating intravenous lines accounted for the greatest number of needlestick injuries in the pre-intervention period. The overall rate of needlestick injury was reduced by 61%, from 0.785 to 0.303 needlestick injuries per 1000 health care worker-days after the introduction of the protective devices (relative risk = 1.958; 95% confidence interval, 1.012 to 3.790; p = 0.046). 
Needlestick injury rates associated with intravenous line manipulation, procedures with 3 ml syringes, and sharps disposal were reduced by 50%; however, reductions in these subcategories were not statistically significant. No seroconversions to HIV-1 or hepatitis B virus seropositivity occurred among those with needlestick injuries. The direct cost for each needlestick prevented was $789. CONCLUSIONS Despite an overall reduction in needlestick injury rates, no statistically significant reductions could be directly attributed to the protective devices. These devices are associated with a significant increase in cost compared with conventional devices. Further studies must be concurrently controlled to establish the effectiveness of these devices.
Infection Control and Hospital Epidemiology | 2006
Glenn Ridenour; Edward S. Wong; Mark A. Call; Michael W. Climo
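The 61% overall reduction in the needlestick study can be checked from the two reported rates per 1,000 health care worker-days. A small sketch (the helper function and its inputs are illustrative, not the study's code):

```python
# Check the rate arithmetic from the needlestick abstract:
# 0.785 -> 0.303 injuries per 1,000 health care worker-days, a ~61% reduction.

def injury_rate(injuries: int, worker_days: int) -> float:
    """Needlestick injuries per 1,000 health care worker-days."""
    return injuries / worker_days * 1000

pre_rate, post_rate = 0.785, 0.303   # rates reported in the abstract
reduction = 1 - post_rate / pre_rate
print(f"Overall reduction: {reduction:.0%}")  # -> Overall reduction: 61%
```

Note that the reported relative risk (1.958) comes from the study's own person-time denominators, so it is not recoverable from the two rounded rates alone.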
Infection Control and Hospital Epidemiology | 2003
Barbara I. Braun; Stephen B. Kritchevsky; Edward S. Wong; Steve L. Solomon; Lynn Steele; Cheryl Richards; Bryan Simmons; Diane Baranowsky; Sue Barnett; Sandi Baus; Jacqueline Berry; Terri Bethea; Gregory Bond; Barbara Bor; Diann Boyette; Jacqueline P. Butler; Ruth Carrico; Janine Chapman; Gwen Cunningham; Mary Dahlmann; Elizabeth DeHaan; Mario Javier DeLuca; Richard J. Duma; LeAnn Ellingson; Jeffrey P. Engel; Pam Falk; W. Lee Fanning; Christine Filippone; Brenda Grant; Bonnie Greene
OBJECTIVES To determine the duration of methicillin-resistant Staphylococcus aureus (MRSA) colonization or infection before entry and during hospitalization in the intensive care unit (ICU) and the characteristics of patients who tested positive for MRSA. DESIGN Prospective observational cohort survey. SETTING A combined medical and coronary care ICU with 16 single-bed rooms in a 427-bed tertiary care Veterans Affairs Medical Center. PATIENTS A total of 720 ICU patients associated with 845 ICU admissions were followed up for the detection of MRSA from January 13, 2003, to October 12, 2003. MRSA colonization was detected in patients by using active surveillance cultures (ASCs) of nasal swab specimens obtained within 48 hours of ICU entry and 3 times weekly thereafter. The duration of colonization during ICU stay and before ICU entry was calculated after a review of surveillance culture results, clinical culture results, and medical history. RESULTS Ninety-three (11.0%) of 845 ICU admissions involved patients who were colonized with MRSA at the time of ICU entry, and 21 admissions (2.5%) involved patients who acquired MRSA during ICU stay. ASCs were positive for MRSA in 84 (73.6%) of the 114 admissions associated with MRSA positivity and were the sole means of identifying MRSA in 50 cases (43.8%). More than half of the MRSA-associated admissions involved patients who were transferred from hospital wards. The total bed-days of care for 38 admissions involving patients who tested positive for MRSA before ICU entry (1131 days) was nearly 20% higher than the total bed-days of care for all admissions associated with MRSA positivity (970 days). Admissions involving MRSA-positive patients were associated with a longer length of hospitalization before ICU entry (P < .001), longer length of ICU stay (P < .001), longer overall length of hospitalization (P < .001), and greater inpatient mortality than admissions involving MRSA-negative patients (P < .001). 
A total of 22.8% of all bed-care days were dedicated to MRSA-positive patients in the ICU, and 55 (48.2%) of 114 admissions associated with MRSA positivity involved patients who were colonized for the duration of their ICU stay. CONCLUSIONS In our unit, ASCs were an effective means to identify MRSA colonization among patients admitted to the ICU. Unfortunately, the majority of identified patients had long durations of stay in our own hospital before ICU entry, with prolonged MRSA colonization. Enhanced efforts to control MRSA will have to account for the prevalence of MRSA within hospital wards and to direct control efforts at these patients in the future.
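The headline proportions in this cohort (93 of 845 admissions colonized at ICU entry, 21 acquiring MRSA in the ICU) reduce to simple percentage arithmetic. A minimal sketch with an illustrative helper (not code from the study):

```python
# Reproduce the cohort proportions quoted in the MRSA surveillance abstract.

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

print(pct(93, 845))  # 11.0 -- colonized with MRSA at ICU entry
print(pct(21, 845))  # 2.5  -- acquired MRSA during ICU stay
```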
Infection Control and Hospital Epidemiology | 1999
Edward S. Wong
OBJECTIVES To describe the conceptual framework and methodology of the Evaluation of Processes and Indicators in Infection Control (EPIC) study and to present results on central venous catheter (CVC) insertion characteristics and organizational practices for preventing bloodstream infections (BSIs). The goal of the EPIC study was to evaluate relationships among processes of care, organizational characteristics, and the outcome of BSI. DESIGN This was a multicenter prospective observational study of variation in hospital practices related to preventing CVC-associated BSIs. Process-of-care information (eg, barrier use during insertions and experience of the inserting practitioner) was collected for a random sample of approximately 5 CVC insertions per month per hospital from November 1998 to December 1999. Organization demographic and practice information (eg, surveillance activities and ICU nurse staffing levels) was also collected. SETTING Medical, surgical, or medical-surgical intensive care units (ICUs) from 55 hospitals (41 U.S. and 14 international sites). PARTICIPANTS Process information was obtained for 3,320 CVC insertions, with an average of 58.2 (+/- 16.1) insertions per hospital. Fifty-four hospitals provided policy and practice information. RESULTS Staff spent an average of 13 hours per week on surveillance in the study ICUs. Most patients received nontunneled, multiple-lumen CVCs, of which fewer than 25% were coated with antimicrobial material. Regarding barriers, most clinicians wore masks (81.5%) and gowns (76.8%); 58.1% used large drapes. Few hospitals (18.1%) used an intravenous team to manage ICU CVCs. CONCLUSIONS Substantial variation exists in CVC insertion practice and BSI prevention activities. Understanding which practices have the greatest impact on BSI rates can help hospitals better target improvement interventions.
Infection Control and Hospital Epidemiology | 2006
Barbara I. Braun; Stephen B. Kritchevsky; Linda Kusek; Edward S. Wong; Steven L. Solomon; Lynn Steele; Cheryl Richards; Robert P. Gaynes; Bryan P. Simmons
In this issue of Infection Control and Hospital Epidemiology, Kirkland et al report their study of the impact of surgical-site infections (SSIs) by the matched cohort method, comparing patients with SSIs to control patients of approximately the same age who had undergone the same operative procedure during the same time period.1 Controls also were matched for the same National Nosocomial Infection Surveillance (NNIS) risk index and even the same surgeon, if possible. The study found that SSIs prolonged hospital stay by a median of 6.5 days and incurred an attributable direct cost of $3,089. For infection control practitioners, these results are reassuring, because they serve to justify our professional existence; for healthcare administrators, they are equally reassuring, because they justify the resources and expenditures invested in infection control programs. However, the findings are hardly unique. Multiple prior studies dating from the 1950s through the 1980s have shown that SSIs are associated with excess hospital stay and excess cost.2-6 In these studies, the magnitude of the excess stay has varied from 1.3 to 23.8 days. The variation could be explained by the different surgical populations evaluated, the different data-collection methods used, and the different criteria for matching infected patients to noninfected controls. As Kirkland et al point out, the majority of those studies were done in the 1970s and early 1980s, prior to the onset of hospital reimbursement by diagnosis-related groups (DRGs). In the current era, in which DRGs and managed care rule, lengths of hospital stay for most admissions have decreased. Given the pressure to minimize hospital stays, one can almost predict a priori that the excess hospital stay attributable to SSIs would be lower in the current era, and indeed the median excess length of stay of 6.5 days found in Kirkland et al’s study is roughly half of what Green and Wenzel found in 1976, prior to the advent of prospective reimbursement.6 There may be yet another explanation for the shorter excess stay attributable to SSIs. In the matched-cohort method, the object is to achieve as close a match as possible between patients who do and do not develop SSIs, in order to minimize confounding variables that independently predispose to long hospitalization as well as to wound infections. 
In prior studies, various criteria have been used for matching: age, operative procedure, admitting or discharge diagnoses, the number and severity of associated diseases, and nutritional status. In Kirkland et al’s study, control patients were matched on commonly used variables, including age, operative procedure, and operating surgeon. Several novel criteria also were used, such as the NNIS risk index and the American Society of Anesthesiologists’ score. These indices or scores are generally used to stratify operative patients with regard to their risk of subsequent infection.7 They were presumably used in this study as proxies for underlying severity of illness. While their use for this purpose is logical and reasonable, it remains unproven. Nonetheless, the set of matching criteria used by Kirkland et al appears fairly stringent and would likely result in closely matched patient groups. As all epidemiologists know, the closer the match, the smaller the difference between groups; thus, the excess length of stay may have diminished on this basis. The price of stringent matching criteria is that a greater number of patients with SSIs (cases) may not be matched with controls, which in turn could introduce selection bias. In most studies, depending on the matching criteria, 10% to 20% of infected patients are not matched to controls.2,4,6 I am pleasantly surprised that, in the study by Kirkland et al, only 17 (6%) of
Infection Control and Hospital Epidemiology | 2000
Edward S. Wong
American Journal of Infection Control | 1992
Edward S. Wong; Jl Stotka; Vm Chinchilli
OBJECTIVE Bloodstream infection (BSI) rates are used as comparative clinical performance indicators; however, variations in definitions and data-collection approaches make it difficult to compare and interpret rates. To determine the extent to which variation in indicator specifications affected infection rates and hospital performance rankings, we compared absolute rates and relative rankings of hospitals across 5 BSI indicators. DESIGN Multicenter observational study. BSI rate specifications varied by data source (clinical data, administrative data, or both), scope (hospital wide or intensive care unit specific), and inclusion/exclusion criteria. As appropriate, hospital-specific infection rates and rankings were calculated by processing data from each site according to 2-5 different specifications. SETTING A total of 28 hospitals participating in the EPIC study. PARTICIPANTS Hospitals submitted deidentified information about all patients with BSIs from January through September 1999. RESULTS Median BSI rates for 2 indicators based on intensive care unit surveillance data ranged from 2.23 to 2.91 BSIs per 1,000 central-line days. In contrast, median rates for indicators based on administrative data varied from 0.046 to 7.03 BSIs per 100 patients. Hospital-specific rates and rankings varied substantially as different specifications were applied; the rates of 8 of 10 hospitals fell both above and below the mean, depending on the specification used. Correlations of hospital rankings among indicator pairs were generally low (rs = 0-0.45), except when both indicators were based on intensive care unit surveillance (rs = 0.83). CONCLUSIONS Although BSI rates seem to be a logical indicator of clinical performance, the use of various indicator specifications can produce remarkably different judgments of absolute and relative performance for a given hospital. 
Recent national initiatives continue to mix methods for specifying BSI rates; this practice is likely to limit the usefulness of such information for comparing and improving performance.
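The rank correlations (rs) quoted above are Spearman coefficients between hospital rankings under different indicator specifications. A small illustration of that comparison using the classic no-ties formula rs = 1 - 6*sum(d^2)/(n*(n^2 - 1)); the five-hospital rankings here are invented, not the study's data:

```python
# Spearman rank correlation between two hospital rankings (no ties),
# as used to compare BSI indicator specifications in the text above.

def spearman_rs(rank_a: list[int], rank_b: list[int]) -> float:
    """Spearman's rs via 1 - 6*sum(d^2) / (n*(n^2 - 1)); assumes no ties."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Five hypothetical hospitals ranked by two different BSI indicators.
by_indicator_1 = [1, 2, 3, 4, 5]
by_indicator_2 = [2, 1, 3, 5, 4]   # similar, but not identical, ordering
print(spearman_rs(by_indicator_1, by_indicator_2))  # -> 0.8
```

Identical rankings give rs = 1.0; the study's finding of rs near 0 for most indicator pairs means the specifications ranked hospitals in nearly unrelated orders.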
American Journal of Infection Control | 1983
Edward S. Wong
In his article on the transmission of nosocomial infections, published in the Proceedings of the Third Decennial International Conference on Nosocomial Infections, sponsored by the Centers for Disease Control and Prevention (CDC) in 1991, Robert Weinstein placed hospital personnel at the center of the maelstrom of nosocomial transmission (Figure).1 He estimated that hospital personnel were responsible for approximately 20% to 40% of the nosocomial spread of pathogens from patient to patient through contact transmission. The conventional wisdom is that, during direct contact, the hands of personnel pick up exogenous organisms that are then deposited onto medical devices or wounds to cause infection. This mechanism of spread is precisely what Semmelweis hypothesized when he observed the spread of puerperal sepsis. Since the Fourth Decennial Conference on Nosocomial Infections will be held shortly in Atlanta, Georgia, it is fair to ask whether we have learned anything new about contact transmission since the last decennial conference. How far beyond Semmelweis have we gone? Semmelweis correctly postulated that the hands carried a potentially deadly agent, but he did not know that it was a microbiological agent. We now know that our hands are an ecosystem composed of permanent (resident) flora and transient flora.2,3 The permanent flora consist of propionibacteria, corynebacteria, micrococci, and staphylococcal species, including Staphylococcus epidermidis, Staphylococcus hominis, and Staphylococcus capitis, all residing in the stratum corneum and feeding on lipids and cellular debris. Permanent flora are viewed as “good” flora: they rarely lead to disease, and, through their production of lipids and bacteriocins, they resist colonization by other, potentially more pathogenic microorganisms. Transient flora do not normally reside on the skin but are picked up during direct contact with patients or contaminated fomites. 
Unless they are eradicated through hand washing, they can be passed on or shed onto wounds, where, because of their pathogenic potential, they can lead to infection. What do we know about the epidemiology of hand contamination, since this is the critical first step in the process of cross-infection? Are certain patient-care activities more prone to result in hand contamination? Is there a critical inoculum that is more likely to lead to infection versus colonization? During the 1970s, the Fulkerson scale was developed and used to rank nursing activities according to their potential for hand contamination.4 Activities, ranked from 1 to 7, ranged from contact with cleaned or washed materials down to contact with objects in contact with patient secretions. These were considered “clean” activities. “Dirty” activities included contact with uninfected patient secretions, infected secretions or excretions, materials contaminated with secretions or excretions, or the infected sites themselves. The purpose of the Fulkerson scale was to identify nursing activities that require hand washing. The presumption was that dirtier activities led to higher rates of cross-infection, but this has never been shown. Recently, Pittet and coworkers published studies that add to and further refine our knowledge of hand contamination.5,6 Pittet and coworkers reconfirmed that the type of patient contact activity was important in determining the degree of hand contamination. In their study, respiratory care qualitatively resulted in a greater degree of hand contamination than handling of body fluid secretions (uninfected), which in turn posed a greater risk of contamination than skin contact. They also were able to demonstrate
JAMA | 1991
Edward S. Wong; Jennifer L. Stotka; Vernon M. Chinchilli; Denise S. Williams; C. Geri Stuart; Sheldon M. Markowitz
Using a daily questionnaire, we prospectively studied 277 physicians from two hospital medical services for incidents of exposure to blood and body fluids and barrier use before and after the implementation of universal precautions. We found that implementation significantly increased the frequency of barrier use during exposure incidents from 54% before implementation to 73% after implementation of universal precautions. Implementation led to a decrease in the number of exposure incidents that resulted in direct contact with blood and body fluids (actual exposures), from 5.07 to 2.66 exposures per physician per patient care month, and to an increase in averted exposures in which direct contact was prevented by the use of barrier devices, from 3.41 exposures per patient care month before implementation to 5.90 exposures per patient care month after implementation. Implementation affected neither the types of body fluid or procedures involved nor the overall rate of exposure incidents (8.5 per patient care month) but, through an increase in barrier use, it did prevent direct contact with blood and body fluids and thus converted what would have been an actual exposure into an averted one. We conclude that universal precautions were effective in reducing the risk of occupational exposures among physicians on a medical service.
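The internal consistency of the figures above is easy to verify: actual plus averted exposures should sum to the (unchanged) overall rate of about 8.5 incidents per patient-care month in both periods. A quick arithmetic check using only the abstract's reported values:

```python
# Verify that barrier use shifted exposure composition without changing the
# overall exposure-incident rate (~8.5 per patient-care month, as reported).

actual_pre, averted_pre = 5.07, 3.41    # before universal precautions
actual_post, averted_post = 2.66, 5.90  # after universal precautions

print(round(actual_pre + averted_pre, 2))    # 8.48 -- total before, ~8.5
print(round(actual_post + averted_post, 2))  # 8.56 -- total after, ~8.5
```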