David J. Samson
Blue Cross Blue Shield Association
Publications
Featured research published by David J. Samson.
Annals of Internal Medicine | 2000
Jerome Seidenfeld; David J. Samson; Vic Hasselblad; Naomi Aronson; Peter C. Albertsen; Charles L. Bennett; Timothy J Wilt
Androgen ablation delays clinical progression and palliates symptoms of metastatic disease in men with advanced prostate cancer (1-4). The earliest method was orchiectomy, and diethylstilbestrol (DES) subsequently became the first reversible method (5-7). Newer alternatives include luteinizing hormone-releasing hormone (LHRH) agonists, such as leuprolide, goserelin, and buserelin (8-10), and nonsteroidal antiandrogens, such as flutamide, nilutamide, and bicalutamide (11-13). Cyproterone acetate is the only steroidal antiandrogen still used for primary hormonal therapy (14-16). Many randomized, controlled trials have compared two or more of these options for monotherapy in men with advanced prostate cancer. Additional trials have tested the efficacy of antiandrogens combined with orchiectomy or LHRH agonists, an approach that is often called combined or maximal androgen blockade. Previous meta-analyses have compared monotherapy with combined androgen blockade (17-19). To date, no systematic review or meta-analysis has evaluated the evidence on effectiveness of monotherapies. Systematic reviews offer structured analysis of results of primary investigations by using strategies to limit bias and random error. They efficiently integrate otherwise unmanageable amounts of information to support clinical decision making. When it is feasible, quantitative meta-analysis can increase power and precision and enhance estimates of treatment effects and exposure risks. Meta-analysis also allows evaluation of consistency of findings or exploration of differences in outcomes, according to predefined subpopulations or factors regarding study quality. As part of a comprehensive review of the evidence on the relative effectiveness and cost-effectiveness of methods of androgen suppression as primary treatment for advanced prostate cancer (20), we conducted a systematic review and meta-analysis of randomized, controlled trials that compared different monotherapies. 
We establish that DES is equivalent to orchiectomy as a comparator for treatments of advanced prostate cancer and summarize our findings on four questions: 1) How effective is an LHRH agonist compared with orchiectomy or DES? 2) How effective is an antiandrogen compared with orchiectomy, DES, or an LHRH agonist? 3) Do the LHRH agonists differ in effectiveness? and 4) Do the antiandrogens differ in effectiveness? Although we sought to compare the adverse effects and quality-of-life effects of these treatments, scant evidence was available. Methods Our review was prospectively designed to define study objectives, search strategy, study selection criteria and methods for determining study eligibility, data elements to be abstracted and methods for abstraction, and methods for assessment of study quality. Two independent reviewers completed each step in this protocol and resolved disagreements by consensus. Disagreements were infrequent and were usually resolved by reconciliation of an oversight. When survival rates were estimated from figures in publications, disagreements were always less than 5% of the measured value, and the consensus estimate was the midpoint. All efficacy studies were randomized, controlled trials. Reviewers assessed the study quality dimensions that have been shown to be sources of bias (21): adequacy of randomization method, use of blinding and adequacy of concealment of allocation, and documentation of withdrawals and whether results were analyzed in an intention-to-treat fashion. Except for blinding and intention-to-treat analysis, published reports usually provided insufficient information to permit valid assessments of these quality dimensions. Therefore, studies that blinded patients and investigators to group assignment and used an intention-to-treat analysis of overall survival or progression-related outcomes were classified as higher-quality studies for sensitivity analysis. 
Blinding was considered not applicable when orchiectomy was one of the study arms. Literature Search and Study Selection We searched the MEDLINE, Cancerlit, EMBASE, and Cochrane Library databases from 1966 to March 1998 and Current Contents through 24 August 1998 for all articles that included at least one of the following terms in their titles, abstracts, or keyword lists: leuprolide (Lupron, TAP Pharmaceuticals Inc., Deerfield, Illinois), goserelin (Zoladex, Zeneca Pharmaceuticals, Wilmington, Delaware), buserelin (Suprefact, Hoechst Marion Roussel, Kansas City, Missouri), flutamide (Eulexin, Schering Corp., Kenilworth, New Jersey), nilutamide (Anandron, Roussel-Uclaf Laboratory, Romainville, France, and Nilandron, Hoechst Marion Roussel), bicalutamide (Casodex, Zeneca Pharmaceuticals, Wilmington, Delaware), cyproterone acetate (Androcur, Schering Corp.), diethylstilbestrol (DES), and orchiectomy (castration or orchidectomy). Search results were limited to studies on humans indexed under the Medical Subject Heading prostatic neoplasms. Randomized, controlled trials were identified by using the search strategy of the United Kingdom Cochrane Center (22). A total of 1477 references were retrieved and checked against the Cochrane Controlled Trials Register, the Cochrane Collaboration CENTRAL register, and trials cited in two recent meta-analyses. No additional trials were identified. Our study selection criteria limited reports of efficacy outcomes to randomized, controlled trials that compared 1) monotherapy with an LHRH agonist and monotherapy with orchiectomy or DES or 2) monotherapy with an antiandrogen and monotherapy with orchiectomy, DES, or an LHRH agonist. To facilitate comparison of results across trials that used different controls, studies that directly compared orchiectomy with DES were also included. Randomized, controlled trials that compared only different doses of the same agent were excluded. 
For adverse events, phase II studies that reported withdrawals from therapy were included. All studies reporting on quality of life were included. The patient population of interest was men with advanced prostate cancer, including regional or disseminated metastases (stage D1 or D2 disease [any T, N1 to N3, M0 or any T, any N, M1]) and minimally advanced disease (stage C disease [T3 or T4, N0 or NX, M0]). We also looked for outcomes that were analyzed by such patient prognostic factors as tumor grade, extent of disease, and performance status. Outcomes of interest were overall, cancer-specific, and progression-free survival; time to treatment failure; adverse effects; and quality of life. Where available, data on patient preferences were included. Adverse Events We encountered well-described difficulties (23, 24) in capturing infrequent events from small trials and inconsistencies among trials in measuring and reporting adverse events. Summarized here is the most reliable index of serious adverse events: the rate of withdrawal from therapy. A summary of adverse events by category (for example, cardiovascular, endocrine) is included in the full evidence report (20). Meta-Analysis We used the general approach to meta-analysis of trials in prostate cancer described by Caubet and colleagues (17), with additional guidance from Whitehead and Whitehead (25). To combine evidence from studies with several different treatment arms, it was necessary to go beyond standard meta-analysis techniques (26). The solution entails defining variables that describe the possible interventions. The poor survival rates for metastatic prostate cancer imply a large hazard rate (rate of death over time). We made the same assumption that is used in standard meta-analysis; that is, we assumed that the effect measure (the hazard ratio in this case) remains constant across studies. 
Because several different treatments are now available, we assumed that all of the hazard ratios among the various treatments remain constant. The model is a generalization of the random-effects model described by DerSimonian and Laird (27). It is essentially the same model used by EGRET (28), except that it is applied to continuous outcomes instead of dichotomous outcomes. The model is a generalization that includes both fixed-effects and random-effects terms. The fixed-effects terms are the individual study intercepts. The random-effects terms are the slopes for the treatment effects. Estimates of all variables, including the extra variation, are obtained by maximum likelihood. On the basis of the preceding assumptions, our objective was to estimate the hazard rate for each arm of each study or to estimate the proportional hazards term and its standard error. We obtained estimates from other statistics for studies that did not provide this information directly. Caubet and colleagues (17) suggested a technique for estimating the log-hazard ratio from the chi-square value of the log-rank test. Where Kaplan-Meier curves were given, it was usually possible to estimate individual hazards, as described in the comprehensive evidence review (20). To use this meta-analysis method, we constructed a table of hazard rates for each arm of each study. The meta-analysis was done with software developed at the Duke Clinical Research Institute, Durham, North Carolina. Sensitivity analyses were used to test for heterogeneity of methods (including the effect of including studies of lower methodologic quality), participants, and interventions. An initial analysis determined whether the results of orchiectomy and DES were comparable and whether it was valid to pool studies in which the control groups used either of these monotherapies. Separate analyses also compared the available monotherapies and categories of monotherapies. 
All meta-analysis results were reported as hazard ratios relative to orchiectomy. Data Synthesis Overview of the Evidence Base The literature search identified 24 controlled trials that, collectively, randomly assigned more than 6600 patients to treatment with different monotherapies for
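The pooling approach described above, a random-effects generalization of the DerSimonian and Laird model applied to log hazard ratios, can be sketched in a few lines. The data below are hypothetical and the code is a minimal illustration of the DerSimonian-Laird estimator, not the authors' actual software (which was developed at the Duke Clinical Research Institute).

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effects (e.g., log hazard ratios) with the
    DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical per-trial log hazard ratios (treatment vs. orchiectomy)
# and their variances; these numbers are invented for illustration.
log_hrs = [0.10, -0.05, 0.20, 0.02]
variances = [0.04, 0.06, 0.09, 0.05]

log_hr, se = dersimonian_laird(log_hrs, variances)
hr = math.exp(log_hr)                                             # pooled hazard ratio
ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
```

When the between-study heterogeneity estimate tau2 is zero, the random-effects result collapses to the fixed-effect (inverse-variance) estimate, which is why reporting tau2 alongside the pooled ratio is informative.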
Annals of Internal Medicine | 2005
Athina Tatsioni; Deborah A. Zarin; Naomi Aronson; David J. Samson; Carole Redding Flamm; Christopher H. Schmid; Joseph Lau
Diagnostic tests, broadly construed, consist of any method of gathering information that may change a clinician's belief about the probability that a patient has a particular condition. Diagnosis is not an end in itself; rather, the purpose of a diagnostic test is to guide patient management decisions and thus improve patient outcomes. Because they are pivotal to health care decision making, diagnostic tests should be evaluated as rigorously as therapeutic interventions. A cursory search of the literature for a diagnostic technology may reveal many articles dealing with various aspects of the test, but these rarely include reports of trials that assess the outcomes of using the test to guide patient management. In the mid-1970s, several groups (1-4) developed a now widely adopted framework to evaluate diagnostic technologies by categorizing studies into 6 levels (5). This framework is hierarchical: Level 1 consists of studies that address technical feasibility, and level 6 consists of those that address societal impact. Table 1 summarizes the framework and the key questions addressed by studies in each level. Table 1. Hierarchy of Diagnostic Evaluation and the Number of Studies Available for Different Levels of Diagnostic Test in a Technology Assessment of Magnetic Resonance Spectroscopy for Brain Tumors Evidence-based Practice Centers (EPCs) have produced several evidence reports and technology assessments of diagnostic technologies (www.ahrq.gov/clinic/techix.htm). This article uses 3 reports produced by the EPCs to illustrate the challenges involved in evaluating diagnostic technologies. The first assessed the use of magnetic resonance spectroscopy (MRS) to evaluate and manage brain masses. It exemplifies the challenges of identifying relevant studies and assessing the methodologic quality of diagnostic accuracy studies (6). 
The second, a report on technologies to diagnose acute cardiac ischemia, illustrates the problem of synthesizing studies that assess tests in different patient populations and report different outcomes (7). In particular, this report highlights the challenges in quantitatively combining data on test accuracy. The third report, on positron emission tomography (PET) for diagnosing and managing Alzheimer disease and dementia, exemplifies the challenges of assessing the societal impact of a diagnostic test (8). Finally, we discuss the problem of publication bias, which may slant the conclusions of a systematic review and meta-analysis. Challenge: Identifying Relevant Published and Unpublished Studies A report that assessed the value of MRS to diagnose and manage patients with space-occupying brain tumors demonstrates that there are few higher-level diagnostic test studies (8). Table 1 shows the number of studies and patients available for systematic review at each of the 6 levels of evaluation. Among the 97 studies that met the inclusion criteria, 85 were level 1 studies that addressed technical feasibility and optimization. In contrast, only 8 level 2 studies evaluated the ability of MRS to differentiate tumors from nontumors, assign tumor grades, and detect intracranial cystic lesions or assessed the incremental value of MRS added to magnetic resonance imaging (MRI). These indications were sufficiently different that the studies could not be combined or compared. Three studies provided evidence that assessed impact on diagnostic thinking (level 3) or therapeutic choice (level 4). No studies assessed patient outcomes or societal impact (levels 5 and 6). 
The case of MRS for use in diagnosis and management of brain tumors illustrates a threshold problem in systematic review of diagnostic technologies: the availability of studies providing at least level 2 evidence (since diagnostic accuracy studies are the minimum level relevant to assessing the outcomes of using the test to guide patient management). Although direct evidence is preferred, robust diagnostic accuracy studies can be used to create a causal chain for linking these studies with evidence on treatment effectiveness, thereby allowing an estimate of the effect on outcomes. The example of PET for Alzheimer disease, described later in this article, shows how decision analysis models can quantify outcomes to be expected from use of a diagnostic technology to manage treatment. The reliability of a systematic review hinges on the completeness of information used in the assessment. Identifying all relevant data poses another challenge. The Hedges Team at McMaster University developed and tested special MEDLINE search strategies that retrieved up to 99% of scientifically strong studies of diagnostic tests (9). Although these search strategies are useful, they do not identify grey literature publications, which by their nature are not easily accessible. The Grey Literature Report is the first step in the initiative of the New York Academy of Medicine (www.nyam.org/library/grey.shtml) to collect grey literature items, which may include theses, conference proceedings, technical specifications and standards, noncommercial translations, bibliographies, technical and commercial documentation, and official documents not published commercially (10). If diagnostic studies with poor test performance results are not published, a systematic review may yield exaggerated estimates of a test's true sensitivity and specificity. 
Because there are typically few studies in the categories of clinical impact, unpublished studies showing no benefit from the use of a diagnostic test have even greater potential to cause bias during a review of evidence. Of note, the problem of publication bias in randomized, controlled trials has been extensively studied, and several visual and statistical methods have been proposed to detect and correct for unpublished studies (11). Funnel plots, which assume symmetrical scattering of studies around a common estimate, are popular for assessing publication bias in randomized, controlled trials. However, the appearance of the shape of the funnel plot has been shown to depend on the choices of weight and metric (12). Funnel plots are being used in systematic reviews of diagnostic tests without adequate empirical assessment, and their use and interpretation should therefore be viewed with caution; the validity of using a funnel plot to detect publication bias remains uncertain. Statistical models to detect and correct for publication bias of randomized trials also have limitations (13). One solution to the problem of publication bias is the mandatory registration of all clinical trials before patient enrollment; for therapeutic trials, considerable progress has already been made in this area. Such a clinical trials registry could readily apply to studies of the clinical outcomes of diagnostic tests (14). Challenge: Assessing Methodologic Quality Diagnostic test evaluations often have methodologic weaknesses (15-17). Of the 8 diagnostic accuracy studies of MRS, half had small sample sizes. Of the larger studies, all had limitations related to patient selection or potential for observer bias. Methodologic quality of a study has been defined as the extent to which all aspects of a study's design and conduct can be shown to protect against systematic bias, nonsystematic bias that may arise in poorly performed studies, and inferential error (18). 
Test accuracy studies often have important biases, which may result in unreliable estimates of the accuracy of a diagnostic test (19-22). Several proposals have been advanced to assess the quality of a study evaluating diagnostic accuracy (23-25). Partly because of the lack of a true reference standard, there is no consensus for a single approach to assessing study quality (26). The lack of consistent relationships between specific quality elements and the magnitude of outcomes complicates the task of assessing quality (27, 28). In addition, quality is assessed on the basis of reported information that does not necessarily reflect how the study was actually performed and analyzed. The Standards for Reporting of Diagnostic Accuracy (STARD) group recently published a 25-item checklist as a guide to improve the quality of reporting all aspects of a diagnostic study (29). The STARD checklist was not developed as a tool to assess the quality of diagnostic studies. However, many items in the checklist are included in a recently developed tool for quality assessment of diagnostic accuracy studies (the QUADAS tool). The QUADAS tool consists of 14 items that cover patient spectrum, reference standard, disease progression bias, verification and review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and intermediate results (28, 30). Challenge: Assessing Applicability of Study Populations Studies beyond the level of technical feasibility must include both diseased and nondiseased individuals who reflect the use of the diagnostic technologies in actual clinical settings. Because of the need to understand the relationship between test sensitivity and specificity, a study that reports only sensitivity (that is, evaluation of the test only in a diseased population) or only specificity (that is, evaluation of the test only in a healthy population) results cannot be used for this evaluation. 
In this section, we base our discussion on the evidence report on evaluating diagnostic technologies for acute cardiac ischemia in the emergency department (7). When the spectrum of disease ranges widely within a diseased population, the interpretation of results in a diagnostic accuracy study may be affected if study participants possess only certain characteristics of the diseased population (15, 21). For example, patients in cardiac care units are more likely to have acute cardiac ischemia than patients in the emergency department. When only patients with more severe illness are analyzed, the false-negative rate is reduced and sensitivity is overestimated. For example, biomar
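The spectrum effect described above can be illustrated with a small simulation. All of the numbers here (biomarker distributions, the positivity threshold) are invented assumptions; the point is only to show how measuring sensitivity in a severely ill subgroup overstates sensitivity across the full diseased spectrum.

```python
import random

random.seed(0)  # deterministic for reproducibility

def sensitivity(values, threshold=50.0):
    """Fraction of diseased patients whose marker exceeds the positivity threshold."""
    return sum(v >= threshold for v in values) / len(values)

# Hypothetical biomarker levels: severely ill patients (e.g., cardiac care unit)
# tend to have higher levels than mildly ill patients (e.g., emergency department).
severe = [random.gauss(70, 10) for _ in range(1000)]
mild = [random.gauss(55, 10) for _ in range(1000)]

sens_severe = sensitivity(severe)        # measured only in the severe subgroup
sens_all = sensitivity(severe + mild)    # measured across the full disease spectrum
```

Under these assumed distributions, `sens_severe` exceeds `sens_all`: a study enrolling only sicker patients will report a sensitivity that does not transfer to the broader population in which the test would actually be used.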
Cochrane Database of Systematic Reviews | 1999
Brian P. Schmitt; Charles L. Bennett; Jerome Seidenfeld; David J. Samson; Timothy J Wilt
OBJECTIVES: This systematic review assessed the effect of maximal androgen blockade (MAB) on survival compared with castration (medical or surgical) alone for patients with advanced prostate cancer.
SEARCH STRATEGY: Randomized controlled trials were sought in general and specialized databases (MEDLINE, EMBASE, Cancerlit, Cochrane Library, VA Cochrane Prostate Disease register) and by reviewing bibliographies.
SELECTION CRITERIA: All published randomized trials were eligible for inclusion provided they (1) randomized men with advanced prostate cancer to receive a non-steroidal anti-androgen (NSAA) medication in addition to castration (medical or surgical) or to castration alone, and (2) reported overall survival, progression-free survival, cancer-specific survival, and/or adverse events. Eligibility was assessed by two independent reviewers.
DATA COLLECTION AND ANALYSIS: Information on patients, interventions, and outcomes was extracted by two independent reviewers using a standardized form. The main outcome measure for comparing effectiveness was overall survival at 1, 2, and 5 years. Secondary outcome measures included progression-free survival and cancer-specific survival. The relationship of the specific NSAA to outcome was evaluated. Additionally, the incidence of adverse effects was measured.
MAIN RESULTS: Twenty trials enrolling 6,320 patients were included. The pooled OR for overall survival was 1.03 (95% CI, 0.85 to 1.25), 1.16 (95% CI, 1.00 to 1.33), and 1.29 (95% CI, 1.11 to 1.50) at 1, 2, and 5 years, respectively; overall survival was significant only at 5 years. The risk difference at 5 years was 0.048 (95% CI, 0.02 to 0.077), for a number needed to treat (NNT) at 5 years of 20.8. Progression-free survival was improved only at 1-year follow-up (OR = 1.38), and cancer-specific survival was improved only at 5 years (OR = 1.22). Adverse events occurred more frequently in those assigned to MAB and resulted in withdrawal in 10%. Quality of life was measured in only one study, which favored orchiectomy alone (less diarrhea and better emotional functioning in the first 6 months).
REVIEWERS' CONCLUSIONS: MAB produces a modest overall and cancer-specific survival benefit at 5 years but is associated with increased adverse events and reduced quality of life.
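The relationship between the reported 5-year risk difference and the NNT is a simple reciprocal, and the review's own figures can be reproduced directly:

```python
# NNT is the reciprocal of the absolute risk difference.
# Figures from the review: risk difference 0.048 (95% CI, 0.02 to 0.077).
def nnt(risk_difference):
    """Number needed to treat for one additional survivor."""
    return 1.0 / risk_difference

point = nnt(0.048)   # matches the reported NNT of 20.8
upper = nnt(0.02)    # CI bound: up to 50 men treated per additional survivor
lower = nnt(0.077)   # CI bound: as few as about 13
```

Note that the CI for the NNT is obtained by inverting the CI bounds of the risk difference, and the wider bound corresponds to the smaller risk difference.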
Academic Radiology | 2002
David J. Samson; Carole Redding Flamm; Etta D. Pisano; Naomi Aronson
RATIONALE AND OBJECTIVES: The purpose of this systematic review was to assess the performance of fluorodeoxyglucose positron emission tomography (PET) in differentiating benign from malignant lesions among patients with abnormal mammograms or a palpable breast mass, and to examine the effects of PET findings on patient care and health outcomes.
MATERIALS AND METHODS: A search of the MEDLINE and CancerLit databases covered articles entered between January 1966 and March 2001. Thirteen articles met the selection criteria. Each article was assessed for study quality characteristics. Meta-analysis was performed with a random-effects model and a summary receiver operating characteristic curve.
RESULTS: A point on the summary receiver operating characteristic curve was selected to reflect average performance, with an estimated sensitivity of 89% and a specificity of 80%. When the prevalence of malignancy is 50%, 40% of all patients would benefit by avoiding biopsy of a benign lesion. The risk of a false-negative result, leading to delayed diagnosis and treatment, is 5.5% of all patients. The negative predictive value is 87.9%; thus, the false-negative risk among patients with negative scans is 12.1%. For a patient with a negative PET scan, a 12% chance of missed or delayed diagnosis of breast cancer is probably too high to be worth the 88% chance of avoiding biopsy of a benign lesion.
CONCLUSION: The evidence does not favor the use of fluorodeoxyglucose PET to help decide whether to perform biopsy. Available studies omit a critical segment of the biopsy population, those with indeterminate mammograms or nonpalpable masses, for whom no conclusions can be reached.
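The abstract's figures follow from Bayes' rule applied to the selected operating point (sensitivity 89%, specificity 80%) at a 50% prevalence of malignancy, as a short sketch shows:

```python
def predictive_values(sens, spec, prev):
    """Population fractions and predictive values from sensitivity,
    specificity, and disease prevalence (Bayes' rule on a 2x2 table)."""
    tp = sens * prev              # true positives, as a fraction of all patients
    fn = (1 - sens) * prev        # false negatives (missed cancers)
    tn = spec * (1 - prev)        # true negatives (biopsies safely avoided)
    fp = (1 - spec) * (1 - prev)  # false positives
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return ppv, npv, fn, tn

# Operating point and prevalence from the review.
ppv, npv, fn, tn = predictive_values(sens=0.89, spec=0.80, prev=0.50)
```

Here `tn` reproduces the 40% of patients who avoid biopsy, `fn` the 5.5% false-negative rate among all patients, and `npv` the 87.9% negative predictive value (so 12.1% of negative scans are missed cancers), confirming the internal consistency of the reported numbers.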
The Journal of Urology | 2013
Linda A Bradley; Glenn E. Palomaki; Steven Gutman; David J. Samson; Naomi Aronson
PURPOSE: We compared the effectiveness of PCA3 (prostate cancer antigen 3) and select comparators for improving initial or repeat biopsy decision making in men at risk for prostate cancer, or treatment choices in men with prostate cancer.
MATERIALS AND METHODS: MEDLINE®, EMBASE®, the Cochrane Database, and gray literature were searched from January 1990 through May 2012. Included studies measured PCA3 and comparator(s) within the same cohort; no matched analyses were possible. Differences in independent performance estimates between PCA3 and comparators were computed within studies. Studies were assessed for quality using QUADAS (Quality Assessment of Diagnostic Accuracy Studies) and for strength of evidence using GRADE (Grading of Recommendations Assessment, Development and Evaluation) criteria.
RESULTS: Among 1,556 publications identified, 34 observational studies were analyzed (24 addressed diagnostic accuracy and 13 addressed treatment decisions). Most studies were conducted in opportunistic cohorts of men referred for procedures and were not designed to answer key questions. Two study biases (partial verification and sampling) were addressed by analyses, allowing some conclusions to be drawn. PCA3 was more discriminatory than total prostate specific antigen increases (e.g., at an observed 50% specificity, summary sensitivities were 77% and 57%, respectively). Analyses indicated that this finding holds for initial and repeat biopsies and that the markers were independent predictors. For all other biopsy decision making comparisons and associated health outcomes, strength of evidence was insufficient. For treatment decision making, strength of evidence was insufficient for all outcomes and comparators.
CONCLUSIONS: PCA3 had higher diagnostic accuracy than total prostate specific antigen increases, but strength of evidence was low (limited confidence in effect estimates). Strength of evidence was insufficient to conclude that PCA3 testing leads to improved health outcomes. For all other outcomes and comparators, strength of evidence was insufficient.
Journal of Hospital Medicine | 2013
Nilam J. Soni; David J. Samson; Jodi L Galaydick; Elbert S. Huang; Naomi Aronson; David Pitrak
BACKGROUND: The utility of procalcitonin to manage patients with infections is unclear. A systematic review of comparative studies using procalcitonin-guided antibiotic therapy in patients with infections was performed.
METHODS: Randomized, controlled trials comparing procalcitonin-guided initiation, intensification, or discontinuation of antibiotic therapy to clinically guided therapy were included. Outcomes were antibiotic usage, morbidity, and mortality. MEDLINE, EMBASE, the Cochrane Database, the National Institute for Clinical Excellence, the National Guideline Clearinghouse, and the Health Technology Assessment Programme were searched from January 1, 1990 to December 16, 2011.
RESULTS: Eighteen randomized, controlled trials were included. Data were pooled into clinically similar patient populations. In adult intensive care unit (ICU) patients, procalcitonin-guided discontinuation of antibiotics reduced antibiotic duration by 2.05 days (95% confidence interval [CI]: -2.59 to -1.52) without increasing morbidity or mortality. In contrast, procalcitonin-guided intensification of antibiotics in adult ICU patients increased antibiotic usage and morbidity. In adult patients with respiratory tract infections, procalcitonin guidance significantly reduced antibiotic duration by 2.35 days (95% CI: -4.38 to -0.33), antibiotic prescription rate by 22% (95% CI: -41% to -4%), and total antibiotic exposure without affecting morbidity or mortality. A single good-quality study of neonates with suspected sepsis demonstrated that procalcitonin guidance reduced antibiotic duration by 22.4 hours (P = 0.012) and reduced the proportion of neonates on antibiotics for ≥72 hours by 27% (P = 0.002).
CONCLUSION: Procalcitonin guidance can safely reduce antibiotic usage when used to discontinue antibiotic therapy in adult ICU patients and when used to initiate or discontinue antibiotics in adult patients with respiratory tract infections.
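Pooled mean differences with confidence intervals, like the antibiotic-duration reductions reported above, are typically combined by inverse-variance weighting; when a trial reports only a CI, the standard error can be recovered from the interval width. The sketch below uses invented per-trial numbers, not the trials in this review:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover a standard error from a reported 95% confidence interval."""
    return (upper - lower) / (2 * z)

def pool_fixed(effects, ses):
    """Inverse-variance (fixed-effect) pooled mean difference with 95% CI."""
    w = [1.0 / s ** 2 for s in ses]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-trial mean differences in antibiotic days
# (negative values = shorter duration with procalcitonin guidance).
effects = [-2.3, -1.8, -2.1]
ses = [se_from_ci(-3.1, -1.5), se_from_ci(-2.6, -1.0), se_from_ci(-3.0, -1.2)]

pooled, ci = pool_fixed(effects, ses)
```

The pooled estimate always lies within the span of the individual effects, and its CI is narrower than any single trial's, which is the precision gain meta-analysis provides.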
American Journal of Infection Control | 2014
Susan Glick; David J. Samson; Elbert S. Huang; Naomi Aronson; Stephen G. Weber
BACKGROUND: Methicillin-resistant Staphylococcus aureus (MRSA) is an important cause of health care-associated infections. Although the evidence in support of MRSA screening has been promising, a number of questions remain about the effectiveness of active surveillance.
METHODS: We searched the literature for studies that examined MRSA acquisition, MRSA infection, morbidity, mortality, harms of screening, and resource utilization when screening for MRSA carriage was compared with no screening or with targeted screening. Because of heterogeneity of the data and weaknesses in study design, meta-analysis was not performed. Strength of evidence (SOE) was determined using the system developed by the Grading of Recommendations Assessment, Development and Evaluation Working Group.
RESULTS: One randomized controlled trial and 47 quasi-experimental studies met our inclusion criteria. We focused on the 14 studies that addressed health care-associated outcomes and that attempted to control for confounding and/or secular trends, because those studies had the potential to support causal inferences. With universal screening for MRSA carriage compared with no screening, 2 large quasi-experimental studies found reductions in health care-associated MRSA infection. The SOE for this finding is low. For each of the other screening strategies evaluated, this review found insufficient evidence to determine the comparative effectiveness of screening.
CONCLUSIONS: Although there is low SOE that universal screening of hospital patients decreases MRSA infection, there is insufficient evidence to determine the consequences of universal screening or the effectiveness of other screening strategies.
Evidence report/technology assessment (Summary) | 1999
Jerome Seidenfeld; David J. Samson; Naomi Aronson; Pc Albertson; Ahmed M. Bayoumi; Charles L. Bennett; Adalsteinn D. Brown; Alan M. Garber; M Gere; Vic Hasselblad; Timothy J Wilt; Kathleen M Ziegler
Chest | 2007
David J. Samson; Jerome Seidenfeld; George R. Simon; Andrew T. Turrisi; Claudia J Bonnell; Kathleen M Ziegler; Naomi Aronson
Archive | 2012
Nilam J. Soni; David J. Samson; Jodi L Galaydick; Vats; David Pitrak; Naomi Aronson