Publication


Featured research published by Lisa M. Schwartz.


Annals of Internal Medicine | 2009

Press Releases by Academic Medical Centers: Not So Academic?

Steven Woloshin; Lisa M. Schwartz; Samuel L. Casella; Abigail T. Kennedy; Robin J. Larson

Context: News reports often exaggerate the importance of medical research. Contribution: The researchers reviewed press releases issued by academic medical centers. They found that many press releases overstated the importance of study findings while underemphasizing cautions that limited the findings' clinical relevance. Caution: The researchers did not attempt to see how the press releases influenced actual news stories. Implication: Academic center press releases often promote research with uncertain clinical relevance without emphasizing important cautions or limitations. The Editors

Medical journalism is often criticized for what reporters cover (for example, preliminary work) and how they cover it (for example, turning modest findings into miracles) (1-4). Critics often place blame squarely on the media, pointing out that few journalists are trained to critically read medical research or suggesting that sensationalism is deliberate: whereas scientists want to promote the truth, the media just want to sell newspapers. But exaggeration may begin with the journalists' sources. Researchers and their funders, and even medical journals, often court media attention through press releases. The strategy works: press releases increase the chance of getting media coverage (5, 6) and shape subsequent reporting (7). An independent medical news rating organization found that more than one third of U.S. health news stories seemed to rely solely or largely on press releases (1). Academic medical centers produce large volumes of research and attract press coverage through press releases. Because these centers set the standard for research and education in U.S. medicine, one might assume that their press releases are measured and unexaggerated. To test this assumption, we examined press releases from academic medical centers in a systematic manner.

Methods. We selected the 10 highest-ranked and 10 lowest-ranked of the academic medical centers covered in U.S. News & World Report's medical school research rankings (8) that issued at least 10 releases in 2005. In addition, we identified each medical school's affiliates by using an Association of American Medical Colleges database. The Appendix Table lists the centers and their affiliated press offices. We initially intended to compare press releases by research ranking, but because we found few differences, we report findings across the entire study sample, highlighting the few differences by rank where they exist.

Appendix Table. Highest- and Lower-Ranked Medical Schools (and Their Affiliated Press Offices) for Research That Issued at Least 10 Press Releases in 2005

Press Release Process. During 2006, a former medical school press officer conducted semistructured (15-minute) telephone interviews with the person in charge of media relations at the 20 centers. The interview script (Appendix) covered release policy (how is research chosen?), production (writing, review, the researchers' role), and an overall assessment (perceived pressure for media results, praise, or backlash).

Supplement. Appendix

Press Release Content. We searched EurekAlert! (a press release database) for all medical and health releases issued by the 20 centers and their affiliates in 2005. The Figure summarizes the search results.

Figure. Study flow diagram. *Of the medical schools that issued at least 10 press releases in 2005.

Science Promoted. After excluding duplicate or nonresearch releases (such as those announcing grants), we determined study focus (animal or human) and publication status; if the study was published, we characterized the journal's academic prominence by using the Thomson Scientific Journal Citation Reports impact factor.

Content Analysis. We randomly selected 200 press releases (10 per center) and assessed presentation of study facts, cautions, and presence of exaggeration by using separate coding schemes for human and nonhuman studies (the Appendix includes both schemes). The schemes included 32 unique items (10 for human studies only, 4 for nonhuman studies only, and 18 common to both). Sixteen items involved simply extracting facts from the release (for example, study size); the other 16 items required subjective judgments (for example, were there cautions about confounding for observational studies?). To confirm key study details (such as population, design, and size), we obtained the research reports (journal article or meeting abstract) referenced in the releases.

Coding Reliability and Analysis. Two research assistants who were blinded to the study's purpose independently coded releases. To measure reliability, the coders and investigators reviewed each code's definition and then reread the release to confirm (or change) their code. Errors due to definition or data entry problems were corrected before agreement was calculated. Intercoder agreement was nearly perfect (9) for both sets of items: for factual items, κ was 1.0 (range, 0.98 to 1.0), and for subjective items, κ was 0.97 (range, 0.79 to 1.0). Disagreements were resolved by 4 of the investigators. We used STATA, version 10 (StataCorp, College Station, Texas) for all analyses.

Role of the Funding Source. The project was funded by the National Cancer Institute and the Robert Wood Johnson Generalist Faculty Scholars Program. Neither source had any role in study design, conduct, or analysis or in the decision to seek publication.

Results. Press Release Process. All centers said that investigators routinely request press releases and are regularly involved in editing and approving them (Table 1). Only 2 centers routinely involve independent reviewers. On average, centers employed 5 press release writers (the highest-ranked centers had more writers than lower-ranked centers [mean, 6.6 vs. 3.7]). Three centers said that they trained writers in research methods and results presentation, but most expected writers to already have these skills and hone them on the job. All 20 centers said that media coverage is an important measure of their success, and most report the number of media hits garnered to the administration.

Table 1. Press Release Process and Press Releases Issued by the 20 Academic Medical Centers

Press Releases Issued. Table 1 shows that the centers issued 989 medical research-related releases in 2005. The centers averaged 49 releases per year; the range was 13 (Brown Medical School) to 186 (Johns Hopkins University School of Medicine). Twelve percent of the releases promoted unpublished research from scientific meetings. Higher-ranked centers issued more releases than lower-ranked centers (743 vs. 246) and were less likely to promote unpublished research (9% vs. 20%).

Press Release Quality. Table 2 summarizes the measures of press release quality.

Table 2. Type of Research Promoted in and Quality of the 200 Press Releases Analyzed in Detail

Study Details and Cautions. Of the 95 releases about primary human research (excluding unstructured reviews and decision models), 77% provided study size and most (66%) quantified the main finding in some way; 47% used at least 1 absolute number, the most transparent way to represent results (10, 11). Few releases (12%) provided access to the full scientific report. Two thirds of the 200 randomly selected releases reported study funding sources; 4% noted conflicts of interest (either that none existed [3 releases] or that some existed [4 releases]). Of all 113 releases about human studies, 17% promoted published studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on inherently limited studies (for example, sample size <30, uncontrolled interventions, primary surrogate outcomes, or unpublished meeting reports). Fewer than half (42%) provided any relevant caveats. For example, a release titled "Lung-sparing treatment for cancer proving effective" (which concluded that treatment was "a safe and effective way to treat early stage lung cancer in medically inoperable patients") lacked cautions about this uncontrolled study of 70 patients. Among the 87 releases about animal or laboratory studies, most (64 of 87) explicitly claimed relevance to human health, yet 90% lacked caveats about extrapolating results to people. For example, a release about a study of ultrasonography reducing tumors in mice, titled "Researchers study the use of ultrasound for treatment of cancer," claimed (without caveats) that "in the future, treatments with ultrasound either alone or with chemotherapeutic and antivascular agents could be used to treat cancers."

Exaggeration. Twenty-nine percent of releases (58 of 200) were rated as exaggerating the findings' importance. Exaggeration was found more often in releases about animal studies than human studies (41% vs. 18%). Almost all releases (195 of 200) included investigator quotes, 26% of which were judged to overstate research importance. For example, a release for a study of mice with skin cancer, titled "Scientists inhibit cancer gene. Potential therapy for up to 30 percent of human tumors," quoted the investigator as saying that "the implication is that a drug therapy could be developed to reduce tumors caused by Ras without significant side effects." Coders thought that the implication exaggerated the study findings, because neither treatment efficacy nor tolerability in humans was assessed. Although 24% (47 of 200) of releases used the word "significant," only 1 clearly distinguished statistical from clinical significance. All other cases were ambiguous, creating an opportunity for overinterpretation: for example, "Not-for-profit hospitals consistently had significantly higher scores than for-profit hospitals."

Discussion. Press releases issued by 20 academic medical centers frequently promoted preliminary research or inherently limited human studies without providing basic details or cautions needed to judge the meaning, relevance, or validity of the science. Our findings are consistent with those of other analyses of pharmaceutical industry (12) and medical jo
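The coding-reliability step above reports intercoder agreement as Cohen's κ. As a rough illustration of what that statistic measures, here is a minimal Python sketch with hypothetical coder judgments; it is not the study's own analysis (the abstract states the analyses were run in STATA), and the example data and variable names are invented.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two raters coding the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder assigned labels independently,
    # following their own marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical judgments ("did the release include any relevant caveat?"):
coder_1 = ["yes", "no", "no", "yes", "no", "no", "yes", "no"]
coder_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.75
```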


JAMA Internal Medicine | 2014

Resource use and guideline concordance in evaluation of pulmonary nodules for cancer: too much and too little care.

Renda Soylemez Wiener; Michael K. Gould; Christopher G. Slatore; Benjamin G. Fincke; Lisa M. Schwartz; Steven Woloshin

IMPORTANCE: Pulmonary nodules are common, and more will be found with implementation of lung cancer screening. How potentially malignant pulmonary nodules are evaluated may affect patient outcomes, health care costs, and effectiveness of lung cancer screening programs. Guidelines for evaluating pulmonary nodules for cancer exist, but little is known about how nodules are evaluated in the usual care setting.
OBJECTIVE: To characterize nodule evaluation and concordance with guidelines.
DESIGN, SETTING, AND PARTICIPANTS: A retrospective cohort study was conducted including detailed review of medical records from pulmonary nodule detection through evaluation completion, cancer diagnosis, or study end (December 31, 2012). The participants included 300 adults with pulmonary nodules from 15 Veterans Affairs hospitals.
MAIN OUTCOMES AND MEASURES: Resources used for evaluation at any Veterans Affairs facility and guideline-concordant evaluation served as the main outcomes.
RESULTS: Twenty-seven of 300 patients (9.0%) with pulmonary nodules ultimately received a diagnosis of lung cancer: 1 of 57 (1.8%) with a nodule of 4 mm or less, 4 of 134 (3.0%) with a nodule of 5 to 8 mm, and 22 of 109 (20.2%) with a nodule larger than 8 mm. Nodule evaluation entailed 1044 imaging studies, 147 consultations, 76 biopsies, 13 resections, and 21 hospitalizations. Radiographic surveillance (n = 277) lasted a median of 13 months but ranged from less than 0.5 months to 8.5 years. Forty-six patients underwent invasive procedures (range per patient, 1-4): 41.3% (19 patients) did not have cancer and 17.4% (8) experienced complications, including 1 death. Notably, 15 of the 300 (5.0%) received no purposeful evaluation and had no obvious reason for deferral, seemingly falling through the cracks. Among 197 patients with a nodule detected after release of the Fleischner Society guidelines, 44.7% received care inconsistent with guidelines (17.8% overevaluation, 26.9% underevaluation). In multivariable analyses, the strongest predictor of guideline-inconsistent care was inappropriate radiologist recommendations (overevaluation relative risk, 4.6 [95% CI, 2.3-9.2]; underevaluation, 4.3 [2.7-6.8]). Other systems factors associated with underevaluation included receiving care at more than 1 facility (2.0 [1.5-2.7]) and nodule detection during an inpatient or preoperative visit (1.6 [1.1-2.5]).
CONCLUSIONS AND RELEVANCE: Pulmonary nodule evaluation is often inconsistent with guidelines, including cases with no workup and others with prolonged surveillance or unneeded procedures that may cause harm. Systems to improve quality (eg, aligning radiologist recommendations with guidelines and facilitating communication across providers) are needed before lung cancer screening is widely implemented.


Inflammatory Bowel Diseases | 2010

When Should Ulcerative Colitis Patients Undergo Colectomy for Dysplasia? Mismatch Between Patient Preferences and Physician Recommendations

Corey A. Siegel; Lisa M. Schwartz; Steven Woloshin; Elisabeth B. Cole; David T. Rubin; Tegan Vay; Judith Baars; Bruce E. Sands

Background: If dysplasia is found on biopsies during surveillance colonoscopy for ulcerative colitis (UC), many experts recommend colectomy given the substantial risk of synchronous colon cancer. The objective was to learn whether UC patients' perceptions of their colon cancer risk and their preferences for elective colectomy match physicians' recommendations if dysplasia is found. Methods: A self-administered written survey included 199 patients with UC for at least 8 years (mean age 49 years, 52% female) who were recruited from Dartmouth-Hitchcock (n = 104) and the University of Chicago (n = 95). The main outcome was the proportion of patients who would disagree with physicians' recommendations for colectomy because of dysplasia. Results: Almost all respondents recognized that UC raised their chance of getting colon cancer. In all, 74% thought it was "unlikely" or "very unlikely" that they would get colon cancer within the next 10 years, and they quantified this risk to be 23%; 60% of patients would refuse a physician's recommendation for elective colectomy if dysplasia was detected, despite being told that they had a 20% risk of having cancer now. On average, these patients would only agree to colectomy if their risk of colon cancer "right now" were at least 73%. Conclusions: UC patients recognize their increased risk of colon cancer and undergo frequent surveillance to reduce their risk. Nonetheless, few seem prepared to follow standard recommendations for elective colectomy if dysplasia is found. This may reflect the belief that surveillance alone is sufficient to reduce their colon cancer risk or genuine disagreement about when it is worth undergoing colectomy. Inflamm Bowel Dis 2010


BMJ | 2012

How a charity oversells mammography

Steven Woloshin; Lisa M. Schwartz

In their occasional series highlighting the exaggerations, distortions, and selective reporting that make some news stories, advertising, and medical journal articles “not so,” Lisa M Schwartz and Steven Woloshin explain how a charity used misleading statistics to persuade women to undergo mammography


Annals of Internal Medicine | 2016

ClinicalTrials.gov and Drugs@FDA: A Comparison of Results Reporting for New Drug Approval Trials

Lisa M. Schwartz; Steven Woloshin; Eugene Zheng; Tony Tse; Deborah A. Zarin

Sponsors are required by federal law to submit summary results of applicable clinical trials (including those beyond phase 1 supporting U.S. Food and Drug Administration [FDA] new drug approvals) to ClinicalTrials.gov for public posting (1). Submissions consist of minimum basic results data elements in tabular format, including results for all primary and secondary outcomes prespecified in the study protocol and all anticipated and unanticipated serious adverse events observed during the trial (2). This law also requires ClinicalTrials.gov to assess ways to verify the accuracy of sponsor-submitted results information, including using public sources, such as FDA advisory committee summary documents and FDA action package approval documents (3). Although ClinicalTrials.gov currently determines internal consistency through quality checks (4), the validity of posted results can be assessed only by comparing submitted data with external reference standards. Recent studies comparing ClinicalTrials.gov data with peer-reviewed journal publications suggest that discrepancies in reported primary and secondary outcomes, numerical results, and adverse events are relatively common, although which source is more likely to be correct is unclear (5-7). Drug approval packages from the FDA may represent a better reference standard than publications for validating results posted on ClinicalTrials.gov, because journal editors and peer reviewers typically lack access to individual-participant data. Consequently, investigators may choose to publish outcomes based largely on statistical significance or other criteria (8-10). In contrast, FDA statisticians, who have access to individual-participant data, can analyze sponsor-submitted trial results independently on the basis of what they believe are the best statistical practices (11, 12). Independent analysis of individual-participant data from a trial may yield treatment effects that range in direction, magnitude, and statistical significance according to the particular outcome selected and how it is analyzed (for example, discretion in selecting measurement populations, such as intention-to-treat vs. per-protocol population; accounting for missing data; or timing for outcomes assessment). For example, on the basis of 6-month results, a high-profile journal article concluded that celecoxib reduced major bleeding compared with ibuprofen and diclofenac (13). However, the FDA reviews, which included results for the protocol-specified 1-year end points, indicated that celecoxib did not reduce major bleeding (14). We compared sponsor-submitted definitions and results posted on ClinicalTrials.gov with corresponding FDA-generated information posted on Drugs@FDA for trials used to support new drug approvals: specifically, how often efficacy and adverse event outcomes could be compared and whether posted data were consistent.

Methods. Sample Collection. To identify 100 parallel-group, randomized trials that were the basis for FDA new drug approvals (that is, new molecular entities), we searched Drugs@FDA (15) beginning with approvals on 1 January 2013 (Figure). Each FDA approval package includes review documents written by FDA staff (such as physicians, statisticians, and pharmacologists). These documents, which summarize analyses of clinical and other data submitted in new drug applications, are used by the FDA to determine whether to approve marketing of new drugs or biologic products for a particular use (16). Although the FDA has made some drug approval packages and component review documents publicly available on Drugs@FDA since 1997 (12), recent federal law now requires systematic posting of action packages for original new drug applications (1).

Figure. Trial search and selection. FDA = U.S. Food and Drug Administration. *The 20 unmatched trials were from 8 new drug reviews: 4 had other matched trials, 4 did not. The 50 trials without results in both sources were from 21 new drug reviews: 12 had other matched trials, 9 did not.

We manually searched FDA medical and statistical reviews to find all trials designated as pivotal and supportive by the FDA reviewer. We then sought to match these trials with the corresponding results in ClinicalTrials.gov, downloaded on 15 March 2015. Although ClinicalTrials.gov and Drugs@FDA were created for different purposes, their content overlaps substantially (Table 1). Because FDA review documents do not list ClinicalTrials.gov identifiers (NCT numbers), we used the process described in the Figure to match trials between sources. We searched Drugs@FDA through July 2014 until reaching our target: 75 pivotal and 25 supportive parallel-group, randomized trials with some results data in both sources, comprising all trials that could be compared during this time frame. We hypothesized that documents available from Drugs@FDA would contain less results information for supportive trials than for pivotal trials, which provide the primary evidence for approval.

Table 1. Comparison of ClinicalTrials.gov and FDA Reviews on Drugs@FDA

Data Extraction. We created a structured data extraction form to capture detailed trial information from ClinicalTrials.gov and Drugs@FDA and revised it after a pilot test extracting information for 5 trials. The 6 major domains (Appendix Table 1) were as follows: 1) trial characteristics, including drug indication, development phase, blinding, comparator, and basic trial data (number of patients randomly assigned, number of patients completing the study, age, and sex distribution); 2) primary outcome, including definition using the following framework: domain, specific measurement, specific metric, and method of aggregation (4), plus time frame, analysis (measurement population and methods to account for missing data), result values (consistency of number analyzed and results for each study group), and treatment effect (consistency in treatment effect and associated CI and P value between experimental and control groups); 3) secondary outcomes (number, data availability, and outcome); 4) serious adverse events (number analyzed and consistency of results); 5) deaths (whether they were reported or it was mentioned that no deaths occurred; consistency of results); and 6) number of other adverse events.

Appendix Table 1. Overview of Data Extracted From ClinicalTrials.gov and FDA Reviews on Drugs@FDA and Definitions of Discrepancies Between These Sources

In contrast to ClinicalTrials.gov records, which typically present only a single set of analyses per outcome, Drugs@FDA review documents often present multiple analyses, including those conducted by the sponsor and separately by the FDA statistician (such as sensitivity analyses with different measurement populations or different imputation methods for missing data). We extracted results from the FDA statistical reviewer's independent analyses (available for two thirds of primary outcomes) or, if unavailable, from the drug company's analyses, provided that the statistical reviewer explicitly indicated agreement with them (remaining primary outcomes). Two assessors systematically extracted, and another verified, the trial design, primary and secondary outcomes, adverse events, and deaths from both ClinicalTrials.gov and Drugs@FDA.

Data Comparison. Consistency of Outcome Definitions and Analyses. The number and definitions of primary and secondary outcomes posted on ClinicalTrials.gov (and concordance in outcome level) were compared with those listed in Drugs@FDA. Definitions of primary outcome were considered discordant if a mismatch occurred at any level of the following framework: domain (such as anxiety), specific measurement (such as Beck Anxiety Inventory), specific metric (such as change from baseline), or method of aggregation (such as mean). We also used an alternative definition of discordance that excludes the method of aggregation to account for researchers who feel it is unnecessary to prespecify statistical analysis plans before trial unblinding (11). In addition, we compared the timing of the outcome assessment plus 3 key aspects of the primary outcome analyses: measurement population, crude or adjusted analysis, and method of handling missing data.

Consistency of Results. We assessed the consistency of results reporting between ClinicalTrials.gov and Drugs@FDA using the approach adopted from Hartung and colleagues (7) (detailed in Appendix Table 1). For example, results for the outcome measure "change from baseline HbA1c (hemoglobin A1c)" were considered discordant if the reported values were not consistent to 1 decimal place (for example, 0.094 is not consistent with 0.12 because it rounds to 0.09, but 0.115 would be consistent because it rounds to 0.12). We analyzed the data at 2 levels: numbers of trials, and numbers of primary outcomes or named serious adverse events, including death. Although the latter approach explicitly shows the frequency of discrepancies for individual measures, it may overstate the distribution of discrepancies among trials, because the number of potential discrepancies is the product of the outcomes times the number of study groups. Reporting numbers of trials mitigates this problem: a few discordant trials (even with many outcomes) would not overwhelm most concordant trials. This approach, however, created an important challenge: How many discordant outcomes (or study groups) does it take to deem a trial discordant between the 2 sources? We used the Hartung approach and called a trial "discordant" if data from ClinicalTrials.gov and Drugs@FDA were inconsistent for 1 or more results; "concordant" if all were consistent; and "cannot compare" if, in both sources, the outcomes did not match or the data were not posted.

Role of the Funding Source. Drs. Woloshin and Schwartz were funded by a contract from the National Library of Medicine. Drs. Tse and Zarin were supported in part by the Intramural Research Program of the National Library of Medicine, National Institutes of He
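The consistency rule adopted from Hartung and colleagues, described above with the HbA1c example, can be made concrete with a short sketch. This is only an illustration of the rounding comparison as the abstract describes it, not the study's actual extraction code; the function names and the use of two decimal places in the worked example are assumptions.

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to(value: str, places: int) -> Decimal:
    """Half-up rounding applied to the printed value (avoids binary-float surprises)."""
    quantum = Decimal(1).scaleb(-places)  # e.g., Decimal("0.01") for places=2
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

def consistent(ctgov_value: str, fda_value: str, places: int) -> bool:
    """True if the two reported values agree once rounded to `places` decimals."""
    return round_to(ctgov_value, places) == round_to(fda_value, places)

# Worked examples from the abstract (change from baseline HbA1c):
print(consistent("0.094", "0.12", places=2))  # False: 0.09 vs 0.12
print(consistent("0.115", "0.12", places=2))  # True: 0.12 vs 0.12
```

Values are passed as strings so the comparison respects the precision actually printed in each source rather than a binary floating-point approximation.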


JAMA Internal Medicine | 2015

A Randomized Trial Testing US Food and Drug Administration “Breakthrough” Language

Tamar Krishnamurti; Steven Woloshin; Lisa M. Schwartz; Baruch Fischhoff



JAMA | 2014

US Food and Drug Administration and Design of Drug Approval Studies

Steven Woloshin; Lisa M. Schwartz; Brittney Frankel; Adrienne Faerber

To enhance protocol quality, federal regulations encourage but do not require meetings between pharmaceutical companies and the US Food and Drug Administration (FDA) during the design phase of pivotal studies assessing drug efficacy and safety for the proposed indication.1 These meetings often generate FDA recommendations for improving research, although companies are not bound to follow them. Companies can also request special protocol assessments (SPAs), in which the FDA formally reviews the protocol.2 When the FDA endorses an SPA, it agrees not to object to study design, outcomes, or analytic issues when it ultimately reviews the drug for approval, provided the company conducted the trial as planned. We describe interactions between the FDA and pharmaceutical companies to learn how the FDA influences pivotal study design of new drugs.


Archive | 2003

Screening Men for Prostate and Colorectal Cancer in the United States

Brenda E. Sirovich; Lisa M. Schwartz; Steven Woloshin


Effective clinical practice : ECP | 1999

How Can We Help People Make Sense of Medical Data

Steven Woloshin; Lisa M. Schwartz


Archive | 2011

Time Trends in Pulmonary Embolism in the United States

Renda Soylemez Wiener; Lisa M. Schwartz; Steven Woloshin

Collaboration


Top co-authors of Lisa M. Schwartz.

Steven Woloshin
The Dartmouth Institute for Health Policy and Clinical Practice

Abigail T. Kennedy
The Dartmouth Institute for Health Policy and Clinical Practice

Baruch Fischhoff
Carnegie Mellon University