Michele Freeman
Oregon Health & Science University
Publications
Featured research published by Michele Freeman.
JAMA | 2011
Devan Kansagara; Honora Englander; Amanda H. Salanitro; David Kagen; Cecelia Theobald; Michele Freeman; Sunil Kripalani
CONTEXT Predicting hospital readmission risk is of great interest to identify which patients would benefit most from care transition interventions, as well as to risk-adjust readmission rates for the purposes of hospital comparison. OBJECTIVE To summarize validated readmission risk prediction models, describe their performance, and assess suitability for clinical or administrative use. DATA SOURCES AND STUDY SELECTION The databases of MEDLINE, CINAHL, and the Cochrane Library were searched from inception through March 2011, the EMBASE database was searched through August 2011, and hand searches were performed of the retrieved reference lists. Dual review was conducted to identify studies published in the English language of prediction models tested with medical patients in both derivation and validation cohorts. DATA EXTRACTION Data were extracted on the population, setting, sample size, follow-up interval, readmission rate, model discrimination and calibration, type of data used, and timing of data collection. DATA SYNTHESIS Of 7843 citations reviewed, 30 studies of 26 unique models met the inclusion criteria. The most common outcome used was 30-day readmission; only 1 model specifically addressed preventable readmissions. Fourteen models that relied on retrospective administrative data could be potentially used to risk-adjust readmission rates for hospital comparison; of these, 9 were tested in large US populations and had poor discriminative ability (c statistic range: 0.55-0.65). Seven models could potentially be used to identify high-risk patients for intervention early during a hospitalization (c statistic range: 0.56-0.72), and 5 could be used at hospital discharge (c statistic range: 0.68-0.83). Six studies compared different models in the same population and 2 of these found that functional and social variables improved model discrimination. Although most models incorporated variables for medical comorbidity and use of prior medical services, few examined variables associated with overall health and function, illness severity, or social determinants of health. CONCLUSIONS Most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly. Although in certain settings such models may prove useful, efforts to improve their performance are needed as use becomes more widespread.
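The c statistic quoted throughout this review is the probability that a model assigns a higher predicted readmission risk to a randomly chosen readmitted patient than to a randomly chosen patient who was not readmitted (0.5 is chance, 1.0 is perfect discrimination). As a rough illustration only, with made-up labels and predictions rather than data from any of the 30 included studies, the calculation looks like this:

```python
# Minimal sketch: c statistic (area under the ROC curve) for a hypothetical
# readmission risk model. All values below are illustrative, not from the review.
from itertools import product

# 1 = readmitted within 30 days, 0 = not readmitted (hypothetical labels)
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
# Predicted readmission probabilities from some model (hypothetical values)
y_score = [0.62, 0.35, 0.50, 0.71, 0.28, 0.44, 0.55, 0.30]

def c_statistic(labels, scores):
    """Probability that a randomly chosen event gets a higher score than a
    randomly chosen non-event (ties count as 0.5)."""
    events = [s for l, s in zip(labels, scores) if l == 1]
    non_events = [s for l, s in zip(labels, scores) if l == 0]
    pairs = list(product(events, non_events))
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return concordant / len(pairs)

print(f"c statistic: {c_statistic(y_true, y_score):.2f}")
```

Read against this definition, the 0.55 to 0.65 range reported for the administrative-data models means those models rank a readmitted patient above a non-readmitted one only slightly more often than chance.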
Annals of Internal Medicine | 2009
David I Buckley; Rongwei Fu; Michele Freeman; Kevin Rogers; Mark Helfand
In the United States, cardiovascular disease accounts for nearly 40% of all deaths each year (1). The factors that make up the Framingham risk score (age, sex, blood pressure, serum total cholesterol or low-density lipoprotein cholesterol level, high-density lipoprotein cholesterol level, cigarette smoking, and diabetes) account for most of the excess risk for incident coronary heart disease (CHD) (2, 3). However, these factors do not explain all of the excess risk (4, 5), and approximately 40% of CHD deaths occur in persons with cholesterol levels that are lower than the population average (6). Several lines of evidence (7, 8) have implicated chronic inflammation in CHD, and inflammatory markers have received much attention as new or emerging risk factors that could account for some of the unexplained variability in CHD risk. C-reactive protein (CRP) is a sensitive, nonspecific systemic marker of inflammation (9). Although it is unknown whether CRP is involved in CHD pathogenesis (10, 11), elevated serum CRP levels are associated with traditional cardiovascular risk factors and obesity (12, 13). In 2002, an expert panel recommended against routine use of CRP in risk assessment for primary prevention of CHD but supported CRP measurement in persons with a 10-year CHD risk of 10% to 20%. It noted that the benefits of this strategy remain uncertain and recommended further research into the implications of using CRP in risk categorization for therapeutic risk reduction in patients (14). The potential clinical benefit of new risk factors for refining global risk assessment is thought to be greatest for persons who are classified as intermediate-risk when stratified by using conventional risk factors (15). In the Framingham risk scoring system, intermediate-risk persons are those with a 10% to 20% risk for coronary death or nonfatal myocardial infarction (hard CHD events) over 10 years. Further stratification by using new markers might reclassify some intermediate-risk persons as low-risk (10-year risk <10%) and others as high-risk (10-year risk >20%). This would permit more aggressive risk reduction therapy in persons reclassified as high-risk and may consequently reduce incident CHD events (16). Several previous meta-analyses (17-19) have assessed the possible independent predictive ability of CRP level for incident CHD risk. In 1998, a meta-analysis of 5 long-term, population-based prospective cohort studies and 2 cohorts of patients with preexisting CHD (17) calculated a risk ratio for coronary events of 1.7 (95% CI, 1.4 to 2.1) for CRP levels in the top tertile versus the bottom tertile. An update of this meta-analysis in 2000 (18) included 7 additional studies. The combined risk ratio for the 11 population-based prospective cohort studies of persons without preexisting CHD was 2.0 (CI, 1.6 to 2.5). Another update in 2004 (19) included 11 new studies as well as the 11 previous cohorts. The combined odds ratio for all 22 studies was 1.58 (CI, 1.48 to 1.69). These 3 meta-analyses, however, lacked a systematic assessment of the characteristics and quality of study design and execution. In particular, they did not systematically assess the degree of adjustment for standard measures of CHD risk (such as the Framingham risk score). Although the first 2 meta-analyses reported the degree of adjustment for potential confounders in each of the included studies, they did not specify how many or which standard coronary risk factors were adjusted for.
Furthermore, these meta-analyses did not use the degree of adjustment as a basis for quality rating or inclusion. The most recent meta-analysis (19) did not rate quality or degree of adjustment for potential confounders. In addition, because the investigators used broad inclusion criteria, the studies in these meta-analyses do not necessarily represent the intermediate-risk population. We conducted a systematic review and meta-analyses of epidemiologic studies to help the U.S. Preventive Services Task Force (USPSTF) determine whether CRP level should be incorporated into guidelines for coronary and cardiovascular risk assessment in primary care. Our review addresses the question of whether elevated CRP levels are independently predictive of incident CHD events, specifically among intermediate-risk persons. Our approach incorporated elements previously used by the USPSTF (20) and several domains of the approach developed by the Grading of Recommendations, Assessment, Development, and Evaluation workgroup (21). Methods Data Sources and Searches We searched MEDLINE for original epidemiologic studies published between 1966 and November 2007. Our search strategy included the terms cardiovascular diseases, C-reactive protein, inflammation, and biological markers and was limited to articles published in English. We obtained additional articles from recent systematic reviews; reference lists of pertinent studies, reviews, editorials, and Web sites; and consultations with experts. Study Selection We included studies that published original data relevant to measuring the increased risk for incident CHD associated with elevated CRP level. We only considered prospective cohort studies (including those based on a cohort within a randomized trial), case-cohort studies, and nested case-control studies. We only included studies that had a follow-up of 2 years or more, reported the outcomes of coronary death and nonfatal myocardial infarction, and adjusted for a minimum of 5 of the 7 risk factors used in the Framingham risk score. We excluded studies in which no participants were likely to be classified as intermediate-risk by using the Framingham risk score and those conducted exclusively in patients with previously diagnosed coronary disease, coronary disease equivalents (such as diabetes), or medical conditions that may cause premature CHD. We included studies in which some patients had cardiovascular disease at baseline only if the studies adjusted for prevalent disease in their analysis. The full systematic evidence report (22) provides a more detailed description of our study methods. Data Extraction and Quality Assessment One investigator reviewed the relevant articles and recorded overlap with the studies included in previous meta-analyses. For our meta-analyses, when multiple articles were published from a single cohort, we included the findings from the analysis with the highest applicability to the study question and the highest validity, on the basis of our quality ratings. In general, we selected cohort studies over nested case-control studies, good-quality studies over fair-quality studies, studies that adjusted for more Framingham risk variables, studies with longer follow-up, and studies that most closely addressed our principal question. We used standardized forms to abstract data on study design, population, size, CRP measurement, Framingham risk factor measurement, length of follow-up, outcomes, and data analysis.
For each study, we recorded how many Framingham risk factors and other confounding factors were included in the model; whether the investigators reported model fit measures, discrimination measures, or model calibration statistics separately for models with and without CRP; and whether the study assessed the degree to which persons were reclassified on the basis of CRP level, overall or in the intermediate-risk group. Two investigators used the USPSTF criteria (20) to independently assess the quality of each study as good, fair, or poor. These criteria are specific to the study design (cohort or nested case-control) and include such items as appropriate assembly or ascertainment of the cohort or the case patients and control participants, reliability and equal application of measurements, response or follow-up rate, and appropriate adjustment for confounding. Because we sought to evaluate the predictive ability of CRP independent of the Framingham risk factors, we required that a study adjust for all 7 of the Framingham variables to receive a quality rating of good, even if the study otherwise had high internal validity. We resolved disagreements regarding quality by discussion, further review, and adjudication by a third reviewer (if necessary). Data Synthesis and Analysis The ideal approach to assessing the clinical effect of expanding the Framingham risk score has been debated extensively. Most previous research on the effect of a new risk factor has focused on the c-statistic, a measure of discrimination. The c-statistic, however, may be a poor indicator of the effect of using CRP level to further stratify persons classified as intermediate-risk by the Framingham risk score. For this reason, recent literature (23-26) has emphasized that studies should examine how well assessing CRP level improves risk prediction and further risk stratification among persons initially classified as intermediate-risk. Most studies provided an overall estimate of the risk associated with high CRP levels, after adjustment for other risk factors, but did not provide specific evidence about the intermediate-risk group. For these studies, we conducted 2 meta-analyses to obtain pooled adjusted risk ratios for the association of hard CHD events and CRP level. The first included all studies that were fair-quality or better, adjusted for at least 5 Framingham risk factors, included at least some participants who were likely to be at intermediate risk, and estimated the risk for CHD associated with CRP level after adjusting for confounders. Because including studies that had methodological flaws or assessed fewer Framingham risk factors could have led to overestimation of the pooled risk ratio, we conducted a second meta-analysis that was restricted to good-quality studies, all of which adjusted for all Framingham risk factors. Because different studies reported ratios for different cutoff levels (including tertiles, quartiles, or quintiles), or as an increase in risk for a given unit of increase
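The pooled adjusted risk ratios described above are typically obtained by combining each study's log risk ratio with inverse-variance weights plus a between-study variance term. As a sketch of that standard random-effects (DerSimonian-Laird) calculation, using hypothetical study estimates rather than the ones actually analyzed in this review:

```python
# Minimal sketch of inverse-variance random-effects (DerSimonian-Laird) pooling
# of adjusted risk ratios on the log scale. Study estimates are hypothetical.
import math

# (risk ratio, lower 95% CI, upper 95% CI) for each hypothetical study
studies = [(1.45, 1.10, 1.91), (1.60, 1.05, 2.44), (1.30, 0.95, 1.78)]

log_rr = [math.log(rr) for rr, lo, hi in studies]
# Standard error recovered from the CI width on the log scale
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
w_fixed = [1 / s**2 for s in se]

# DerSimonian-Laird estimate of the between-study variance (tau^2)
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_rr)) / sum(w_fixed)
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_rr))
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled log risk ratio, and its standard error
w_re = [1 / (s**2 + tau2) for s in se]
pooled = sum(w * y for w, y in zip(w_re, log_rr)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

print(f"pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}"
      f" to {math.exp(pooled + 1.96 * se_pooled):.2f})")
```

Setting tau2 to zero reduces this to the fixed-effect pooled estimate; the review's own analyses may differ in detail, so the sketch is only meant to show the shape of the calculation.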
Journal of General Internal Medicine | 2008
Linda Humphrey; Rongwei Fu; David I Buckley; Michele Freeman; Mark Helfand
BACKGROUND Periodontal disease is common among adults in the US and is a potential source of chronic inflammation. Recent data have suggested an important role for chronic inflammation in the development of coronary heart disease (CHD). OBJECTIVE To aid the United States Preventive Services Task Force (USPSTF) in evaluating whether periodontal disease is an independent novel risk factor for incident CHD. METHODS Studies were identified by searching Medline (1966 through March 2008) and reviewing prior systematic reviews, reference lists, and consulting experts. Prospective cohort studies that assessed periodontal disease, Framingham risk factors, and coronary heart disease incidence in the general adult population without known CHD were reviewed and quality rated using criteria developed by the USPSTF. Meta-analysis of good and fair quality studies was conducted to determine summary estimates of the risk of CHD events associated with various categories of periodontal disease. RESULTS We identified seven articles of good or fair quality from seven cohorts. Several studies found periodontal disease to be independently associated with increased risk of CHD. Summary relative risk estimates for different categories of periodontal disease (including periodontitis, tooth loss, gingivitis, and bone loss) ranged from 1.24 (95% CI 1.01–1.51) to 1.34 (95% CI 1.10–1.63). Risk estimates were similar in subgroup analyses by gender, outcome, study quality, and method of periodontal disease assessment. CONCLUSION Periodontal disease is a risk factor or marker for CHD that is independent of traditional CHD risk factors, including socioeconomic status. Further research in this important area of public health is warranted.
Mayo Clinic Proceedings | 2008
Linda Humphrey; Rongwei Fu; Kevin Rogers; Michele Freeman; Mark Helfand
OBJECTIVE To determine whether an elevated homocysteine level is an independent risk factor for the development of coronary heart disease (CHD) to aid the US Preventive Services Task Force in its evaluation of novel risk factors for incident CHD. METHODS Studies of homocysteine and CHD were identified by searching MEDLINE (1966 through March 2006). We obtained additional articles by reviewing reference lists from prior reviews, original studies, editorials, and Web sites and by consulting experts. We included prospective cohort studies that measured homocysteine and Framingham risk factors and the incidence of CHD in the general adult population without known CHD. Each study was quality rated using criteria developed by the US Preventive Services Task Force. We conducted a meta-analysis using a random-effects model to determine summary estimates of the risk of major CHD associated with each 5-micromol/L increase in homocysteine level. The systematic review and meta-analysis were conducted between January 25, 2005, and September 17, 2007. RESULTS We identified 26 articles of good or fair quality. Most studies found elevations of 20% to 50% in CHD risk for each increase of 5 micromol/L in homocysteine level. Meta-analysis yielded a combined risk ratio for coronary events of 1.18 (95% confidence interval, 1.10-1.26) for each increase of 5 micromol/L in homocysteine level. The association between homocysteine and CHD was similar when analyzed by sex, length of follow-up, outcome, study quality, and study design. CONCLUSION Each increase of 5 micromol/L in homocysteine level increases the risk of CHD events by approximately 20%, independently of traditional CHD risk factors.
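Because the summary estimate is expressed per 5-micromol/L increment, risk ratios for other increments follow by exponentiation if one assumes the same log-linear dose-response. The short calculation below applies that assumption to the 1.18 figure reported in the abstract; the rescaled values are illustrative, not results reported by the study:

```python
# Rescaling a per-5-micromol/L risk ratio to other increments, assuming the
# log-linear dose-response implied by the per-increment summary estimate.
rr_per_5 = 1.18          # combined risk ratio per 5-micromol/L (from the abstract)

rr_per_10 = rr_per_5 ** (10 / 5)   # two 5-unit steps
rr_per_1 = rr_per_5 ** (1 / 5)     # one fifth of a step

print(f"RR per 10 micromol/L: {rr_per_10:.2f}")  # ~1.39
print(f"RR per 1 micromol/L:  {rr_per_1:.2f}")   # ~1.03
```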
Annals of Internal Medicine | 2011
Devan Kansagara; Rongwei Fu; Michele Freeman; Fawn Wolf; Mark Helfand
BACKGROUND The benefits and harms of intensive insulin therapy (IIT) titrated to strict glycemic targets in hospitalized patients remain uncertain. PURPOSE To evaluate the benefits and harms of IIT in hospitalized patients. DATA SOURCES MEDLINE and Cochrane Database of Systematic Reviews from 1950 to January 2010, reference lists, experts, and unpublished sources. STUDY SELECTION English-language randomized, controlled trials comparing protocols titrated to strict or less strict glycemic targets. DATA EXTRACTION Two reviewers independently abstracted data from each study on sample, setting, glycemic control interventions, glycemic targets, mean glucose levels achieved, and outcomes. Results were grouped by patient population or setting. A random-effects model was used to combine trial data on short-term mortality (≤28 days), long-term mortality (90 or 180 days), infection, length of stay, and hypoglycemia. The Grading of Recommendations Assessment, Development, and Evaluation system was used to rate the overall body of evidence for each outcome. DATA SYNTHESIS In a meta-analysis of 21 trials in intensive care unit, perioperative care, myocardial infarction, and stroke or brain injury settings, IIT did not affect short-term mortality (relative risk, 1.00 [95% CI, 0.94 to 1.07]). No consistent evidence showed that IIT reduced long-term mortality, infection rates, length of stay, or the need for renal replacement therapy. No evidence of benefit from IIT was reported in any hospital setting, although the best evidence for lack of benefit was in intensive care unit settings. Data combined from 10 trials showed that IIT was associated with a high risk for severe hypoglycemia (relative risk, 6.00 [CI, 4.06 to 8.87]; P < 0.001). Risk for IIT-associated hypoglycemia was increased in all hospital settings. LIMITATIONS Methodological shortcomings and inconsistencies limit the data in perioperative care, myocardial infarction, and stroke or brain injury settings. Differences in insulin protocols and patient and hospital characteristics may affect generalizability across treatment settings. CONCLUSION No consistent evidence demonstrates that IIT targeted to strict glycemic control compared with less strict glycemic control improves health outcomes in hospitalized patients. Furthermore, IIT is associated with an increased risk for severe hypoglycemia. PRIMARY FUNDING SOURCE U.S. Department of Veterans Affairs Health Services Research and Development Service.
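The pooled relative risks above are built up from per-trial event counts. A minimal sketch of the per-trial step, computing a relative risk and its 95% CI from a 2x2 table, is shown below; the counts are hypothetical and are not taken from any trial in the review:

```python
# Minimal sketch: relative risk and 95% CI from a single trial's 2x2 counts.
# Counts are hypothetical, not data from any included trial.
import math

events_iit, n_iit = 30, 500        # severe hypoglycemia, intensive insulin arm
events_ctrl, n_ctrl = 8, 500       # severe hypoglycemia, conventional arm

risk_iit = events_iit / n_iit
risk_ctrl = events_ctrl / n_ctrl
rr = risk_iit / risk_ctrl

# Standard error of log(RR) for a 2x2 table
se_log_rr = math.sqrt(1 / events_iit - 1 / n_iit + 1 / events_ctrl - 1 / n_ctrl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Per-trial log relative risks computed this way are then combined across trials with a random-effects model, which is how the review arrives at its pooled estimates.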
Journal of General Internal Medicine | 2008
Somnath Saha; Michele Freeman; Joahd Toure; Kimberly M Tippens; Christine Weeks; Said A. Ibrahim
Objectives To better understand the causes of racial disparities in health care, we reviewed and synthesized existing evidence related to disparities in the “equal access” Veterans Affairs (VA) health care system. Methods We systematically reviewed and synthesized evidence from studies comparing health care utilization and quality by race within the VA. Results Racial disparities in the VA exist across a wide range of clinical areas and service types. Disparities appear most prevalent for medication adherence and surgery and other invasive procedures, processes that are likely to be affected by the quantity and quality of patient–provider communication, shared decision making, and patient participation. Studies indicate a variety of likely root causes of disparities including: racial differences in patients’ medical knowledge and information sources, trust and skepticism, levels of participation in health care interactions and decisions, and social support and resources; clinician judgment/bias; the racial/cultural milieu of health care settings; and differences in the quality of care at facilities attended by different racial groups. Conclusions Existing evidence from the VA indicates several promising targets for interventions to reduce racial disparities in the quality of health care.
Pediatrics | 2007
Elizabeth M Haney; Laurie Hoyt Huffman; Christina Bougatsos; Michele Freeman; Robert D. Steiner; Heidi D. Nelson
OBJECTIVE. This was a systematic evidence review for the US Preventive Services Task Force, intended to synthesize the published evidence regarding the effectiveness of selecting, testing, and managing children and adolescents with dyslipidemia in the course of routine primary care. METHODS. Literature searches were performed to identify published articles that addressed 10 key questions. The review focused on screening relevant to primary care of children without previously identified dyslipidemias, but included treatment trials of children with dyslipidemia because some drugs have only been tested in that population. RESULTS. Normal values for lipids for children and adolescents are defined according to population levels (percentiles). Age, gender, and racial differences and temporal trends may alter these statistical cut points. Approximately 40% to 55% of children with elevated total cholesterol and low-density lipoprotein levels will continue to have elevated lipid levels on follow-up. Current screening recommendations based on family history will fail to detect substantial numbers (30%–60%) of children with elevated lipid levels. Drug treatment for dyslipidemia in children has been studied and shown to be effective only for suspected or proven familial monogenic dyslipidemias. Intensive dietary counseling and follow-up can result in improvements in lipid levels, but these results have not been sustained after the cessation of the intervention. The few trials of exercise are of fair-to-poor quality and show little or no improvements in lipid levels for children without monogenic dyslipidemias. Although reported adverse effects were not serious, studies were generally small and not of sufficient duration to determine long-term effects of either short or extended use. CONCLUSIONS. Several key issues about screening and treatment of dyslipidemia in children and adolescents could not be addressed because of lack of studies, including effectiveness of screening on adult coronary heart disease or lipid outcomes, optimal ages and intervals for screening children, or effects of treatment of childhood lipid levels on adult coronary heart disease outcomes.
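The point that normal pediatric lipid values are "defined according to population levels (percentiles)" means the cut points are statistical, derived from a reference distribution rather than from outcome data. A small sketch with simulated values (the distribution parameters below are invented purely for illustration):

```python
# Sketch of a percentile-based cut point, as used for pediatric lipid "normal"
# values. The LDL-C values below are simulated, not population data.
import random
import statistics

random.seed(0)
# Hypothetical LDL-C measurements (mg/dL) for a pediatric reference sample
ldl = [random.gauss(100, 25) for _ in range(5000)]

def percentile(values, pct):
    """Simple nearest-rank percentile."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

p95 = percentile(ldl, 95)
print(f"95th percentile (hypothetical 'elevated' cut point): {p95:.0f} mg/dL")
print(f"sample mean: {statistics.mean(ldl):.0f} mg/dL")
```

Because such cut points track the reference population, the age, gender, racial, and temporal differences noted in the abstract shift them as the underlying distribution shifts.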
Annals of Internal Medicine | 2014
Devan Kansagara; Joel Papak; Amirala S. Pasha; Maya Elin O'Neil; Michele Freeman; Rose Relevo; Ana R. Quiñones; Makalapua Motu'apuaka; Janice H. Jou
Hepatocellular carcinoma (HCC) incidence and mortality have increased internationally over the past 4 decades (1, 2), with localized tumors accounting for most of the increase (3). The rationale for screening is that imaging tests, such as ultrasonography, may identify patients with early-stage HCC (4), and several potential options exist for treating patients with early-stage HCC, including liver transplantation, radiofrequency ablation, and liver resection (5). Several professional societies currently recommend HCC screening using imaging studies and tumor markers, primarily in patients at higher risk for HCC due to chronic hepatitis B or cirrhosis (5-7). However, recommendations for HCC screening remain controversial, in part because of concerns over the quality and paucity of existing evidence and because concerns about overdiagnosis and patient harms have been raised in other cancer screening programs (8-12). We conducted a systematic review of the published literature to better understand the incremental benefits and harms of routine HCC screening compared with clinical diagnosis. Methods This manuscript is part of a larger report commissioned by the Veterans Health Administration (13). A protocol describing the review plan was posted to a public Web site before the study was initiated (14). The analytic framework that guided this review was developed in collaboration with a panel of technical experts and is provided in Figure 1 of Supplement 1. Data Sources and Searches We searched MEDLINE, PsycINFO, the Cochrane Central Register of Controlled Trials, the Cochrane Database of Systematic Reviews, and ClinicalTrials.gov from database inception to June 2013. We updated the MEDLINE, PsycINFO, and ClinicalTrials.gov searches in April 2014. The detailed search strategy is provided in Supplement 2. We obtained additional articles from systematic reviews, reference lists of pertinent studies, reviews, and editorials and by consulting technical advisors. Study Selection Detailed inclusion and exclusion criteria are provided in Supplement 3. We included English-language, controlled clinical trials and observational studies that assessed the effects of screening on HCC-specific and all-cause mortality in adult populations. We used the term screening to include any surveillance or screening program in which specific tests (ultrasonography, computed tomography, magnetic resonance imaging, or α-fetoprotein measurement) were performed explicitly to detect HCC in asymptomatic patients. Studies had to include a comparison group of patients who did not have routine screening. We excluded observational studies that did not consider important confounding factors, such as age, sex, and liver disease severity. Because we anticipated few clinical trials comparing screening versus no screening, we also included trials comparing frequencies of screening. We included studies of any population with chronic liver disease with or without cirrhosis but excluded studies of patients with prior HCC. We also searched for systematic reviews and primary studies that focused on potential harms of HCC screening. Seven investigators reviewed the titles and abstracts of citations identified from literature searches. If at least 1 reviewer indicated that a citation may be relevant, a second reviewer screened the citation for concordance.
Two reviewers independently assessed the full-text articles for inclusion using the eligibility criteria in Supplement 3. Disagreements were resolved through consensus. Data Extraction and Quality Assessment From each study, we abstracted study design, objectives, setting, population characteristics (including sex, age, race or ethnicity, and liver disease cause and severity), patient eligibility and exclusion criteria, number of patients, years of enrollment, method and frequency of screening, adjusted and unadjusted mortality, and adverse events. A second author checked each entry for accuracy. Two reviewers independently assessed the quality of each trial by using a tool developed by the Cochrane Collaboration (15). We resolved disagreements through discussion. Each trial was given an overall summary assessment of low, high, or unclear risk of bias. Two reviewers graded the strength of evidence for outcomes by using published criteria that consider the consistency, coherence, directness, and applicability of a body of evidence as well as the internal validity of individual studies (16). We adapted existing tools to assess the quality of observational studies (17-19). We do not report an overall summary assessment for observational studies because there are no validated criteria for doing so. Data Synthesis and Analysis We qualitatively synthesized the evidence on the benefits and harms of HCC screening. Clinical heterogeneity and the small number of trials precluded a meta-analysis of the findings. Role of the Funding Source The U.S. Department of Veterans Affairs Quality Enhancement Research Initiative supported this review but had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. Results The electronic and manual searches yielded 13801 total citations, from which we identified 286 potentially relevant full-text articles. Twenty-two primary studies contained primary data relevant to the efficacy of HCC screening and met our inclusion criteria (Figure: summary of evidence search and selection). Effects of Screening on Mortality Two trials and 18 observational studies provided very-low-strength evidence from which to draw conclusions about the mortality effects of HCC screening compared with no screening. The trials had substantial methodological flaws that threatened their internal validity, and their findings have limited applicability beyond the patient population with hepatitis B. The observational studies, most of which included patients with cirrhosis and hepatitis B, hepatitis C, or alcoholic liver disease, showed that screening detects patients with earlier-stage disease, who more frequently receive curative therapy. However, it is impossible to say whether the longer survival in patients with screen-detected disease was a true effect of screening or reflects lead- and length-time biases inherent to all observational studies, as well as selection biases that were common in many of the studies. Randomized, Controlled Trials Two community-based trials compared the effects on mortality of screening versus no screening (20, 21). Both were conducted in China in areas with high prevalence of HCC, and most participants had hepatitis B with or without cirrhosis (Table 1 of Supplement 4).
One cluster randomized trial recruited screening group participants (n = 9757) from 1993 to 1995 and offered them serum α-fetoprotein testing and ultrasonography every 6 months. Participants in the control group (n = 9443) were neither made aware of the study nor actively followed. Death from HCC occurred less frequently in the screening group (83.2 vs. 131.5 per 100 000 person-years; rate ratio, 0.63 [95% CI, 0.41 to 0.98]). However, the trial had several serious methodological limitations that gave it a high risk of bias (Table 2 of Supplement 4). One major concern is whether patients in both groups had the same risk for HCC. There is no information about randomization technique or allocation concealment and very little information about the baseline characteristics of the 2 groups, which is especially important in cluster randomized trials. Another concern is that the weak methods used to ascertain the outcome measure, death from HCC, could have introduced bias. If deaths were underreported in the control group, results could have been biased toward the null. On the other hand, if outcome adjudicators were not blinded, more deaths in the control group could have been misclassified as HCC-related, especially because the symptoms that define stage III HCC (cachexia, jaundice, and ascites) overlap substantially with symptoms of end-stage liver disease and no data were provided about liver disease severity in either group. Selective reporting and analysis of favorable outcomes were other concerns. Although the authors reported that vital status was available for all patients, overall mortality was not reported and there was no statistical adjustment for the effects of clustering. Finally, the study is less applicable to patients in the United States, in whom cirrhosis and thus HCC are usually secondary to hepatitis C, and the results probably have limited applicability to contemporary practice, in which the threshold for imaging for symptoms may be lower and the number of patients with incidentally discovered HCC on imaging is higher. The second trial used patient-level randomization stratified by township to assign patients with hepatitis B from 1989 to 1992 to the screening intervention (n = 3712), which consisted of serial α-fetoprotein tests followed by ultrasonography for high α-fetoprotein values, or the usual care group (n = 1869) (21). The population-based cancer registry used active case-finding techniques, and mortality was ascertained through the cancer registry and a population-based vital status registry. Cancer staging and cause of death were assessed by personnel blinded to intervention status. Only 28.8% of screening group participants completed all scheduled testing, but all participants completed at least 1 screening test. Fewer patients had stage III HCC in the screening group (19.8% vs. 41.0%; P value not reported). Hepatocellular carcinoma mortality was similar in both groups (1138 vs. 1114 per 100 000 person-years; P = 0.86), as was all-cause mortality (1843 vs. 1788 per 100 000 person-years; P value not s
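For reference, the rate ratio reported for the first trial is simply the ratio of the two HCC death rates quoted above; its confidence interval additionally depends on the event counts and on adjustment for clustering, which the text notes was not performed. A quick check of the point estimate:

```python
# The HCC death rate ratio in the cluster randomized trial is the ratio of the
# two person-year rates quoted above. (Its 95% CI additionally depends on the
# event counts and on cluster adjustment, which are not reproduced here.)
rate_screened = 83.2      # HCC deaths per 100 000 person-years, screening group
rate_control = 131.5      # HCC deaths per 100 000 person-years, control group

rate_ratio = rate_screened / rate_control
print(f"rate ratio: {rate_ratio:.2f}")   # ~0.63, matching the reported 0.63
```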
Pain Medicine | 2009
Steven K. Dobscha; Michael E. Clark; Benjamin J. Morasco; Michele Freeman; Rose Campbell; Mark Helfand
OBJECTIVE To review the literature addressing the assessment and management of pain in patients with polytraumatic injuries including traumatic brain injury (TBI) and blast-related headache, and to identify patient, clinician and systems factors associated with pain-related outcomes. DESIGN Systematic review. METHODS We conducted searches in MEDLINE of literature published from 1950 through July 2008. Due to a limited number of studies using controls or comparators, we included observational and rigorous qualitative studies. We systematically rated the quality of systematic reviews, cohort, and case-control design studies. RESULTS One systematic review, 93 observational studies, and one qualitative research study met inclusion criteria. The literature search yielded no published studies that assessed measures of pain intensity or pain-related functional interference among patients with cognitive deficits due to TBI, that compared patients with blast-related headache with patients with other types of headache, or that assessed treatments for blast-related headache pain. Studies on the association between TBI severity and pain reported mixed findings. There was limited evidence that the following factors are associated with pain among TBI patients: severity, location, and multiplicity of injuries; insomnia; fatigue; depression; and post-traumatic stress disorder. CONCLUSIONS Very little evidence is currently available to guide pain assessment and treatment approaches in patients with polytrauma. Further research employing systematic observational as well as controlled intervention designs is clearly indicated.
Pain Medicine | 2009
Mark Helfand; Michele Freeman
OBJECTIVE To review the literature addressing effective care for acute pain in inpatients on medical wards. METHODS We searched Medline, PubMed Clinical Queries, and the Cochrane Database for systematic reviews published in 1996 through April 2007 on the assessment and management of acute pain in inpatients, including patients with impaired self-report or chemical dependencies. We conducted a focused search for studies on the timing and frequency of assessment, and on the use of patient-controlled analgesia (PCA) for nonsurgical pain. Two investigators performed a critical analysis of the literature and compiled narrative summaries to address the key questions. RESULTS We found no evidence that directly linked the timing, frequency, or method of pain assessment with outcomes or safety in medical inpatients. There is good evidence that treating abdominal pain does not compromise timely diagnosis and treatment of the surgical abdomen. Pain management teams and other systemwide interventions improve assessment and use of analgesics, but do not clearly affect pain outcomes. The safety and effectiveness of PCA in medical patients have not been studied. There is weak evidence that most cognitively impaired individuals can understand at least one self-assessment measure. Almost no evidence is available to guide management of pain in delirium. Evidence for managing pain in patients with substance abuse disorders or chronic opioid use is weak, being derived from case reports, retrospective studies, and expert opinion. CONCLUSIONS Pain is a prevalent problem for medical inpatients. Clinical research is needed to guide the assessment and management of pain in this setting.