
Publication


Featured research published by Mark D. Aronson.


Journal of General Internal Medicine | 2005

What Can Hospitalized Patients Tell Us About Adverse Events? Learning from Patient-Reported Incidents

Saul N. Weingart; Odelya Pagovich; Daniel Z. Sands; Joseph Ming Wah Li; Mark D. Aronson; Roger B. Davis; David W. Bates; Russell S. Phillips

PURPOSE: Little is known about how well hospitalized patients can identify errors or injuries in their care. Accordingly, the purpose of this study was to elicit incident reports from hospital inpatients in order to identify and characterize adverse events and near-miss errors. SUBJECTS: We conducted a prospective cohort study of 228 adult inpatients on a medicine unit of a Boston teaching hospital. METHODS: Investigators reviewed medical records and interviewed patients during the hospitalization and by telephone 10 days after discharge about “problems,” “mistakes,” and “injuries” that occurred. Physician investigators classified patients’ reports. We calculated event rates and used multivariable Poisson regression models to examine the factors associated with patient-reported events. RESULTS: Of 264 eligible patients, 228 (86%) agreed to participate and completed 528 interviews. Seventeen patients (8%) experienced 20 adverse events; 1 was serious. Eight patients (4%) experienced 13 near misses; 5 were serious or life threatening. Eleven (55%) of 20 adverse events and 4 (31%) of 13 near misses were documented in the medical record, but none were found in the hospital incident reporting system. Patients with 3 or more drug allergies were more likely to report errors compared with patients without drug allergies (incidence rate ratio 4.7, 95% CI 1.7, 13.4). CONCLUSIONS: Inpatients can identify adverse events affecting their care. Many patient-identified events are not captured by the hospital incident reporting system or recorded in the medical record. Engaging hospitalized patients as partners in identifying medical errors and injuries is a potentially promising approach for enhancing patient safety.
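
As a hedged illustration of the kind of analysis this study describes, the sketch below fits a Poisson regression with a log follow-up offset and exponentiates a coefficient to recover an incidence rate ratio (IRR) with its 95% CI. This is not the authors' code; the toy data and column names are hypothetical.

```python
# Minimal sketch of an incidence-rate-ratio analysis via Poisson regression.
# Data and column names are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient: reported events, days observed, and an indicator
# for having 3 or more drug allergies.
df = pd.DataFrame({
    "events":         [0, 1, 0, 2, 0, 3, 0, 1],
    "followup_days":  [5, 7, 4, 9, 6, 8, 5, 7],
    "many_allergies": [0, 0, 0, 1, 0, 1, 0, 1],
})

# Log follow-up time enters as an offset so the model estimates event
# rates rather than raw counts.
fit = smf.poisson("events ~ many_allergies", data=df,
                  offset=np.log(df["followup_days"])).fit()

irr = np.exp(fit.params["many_allergies"])             # incidence rate ratio
lo, hi = np.exp(fit.conf_int().loc["many_allergies"])  # 95% CI bounds
print(f"IRR = {irr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```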


The American Journal of Medicine | 1988

Prevalence and recognition of alcohol abuse in a primary care population

Paul D. Cleary; Merle Miller; Tom Bush; M. M. Warburg; Thomas L. Delbanco; Mark D. Aronson

PURPOSE: The purpose of this study was to assess the prevalence, physician recognition, and treatment of alcohol abuse among patients of 19 senior medical residents practicing in a hospital-based, primary care setting. PATIENTS AND METHODS: Interviews of 242 outpatients were conducted, and alcohol abuse and dependence, as defined by the Diagnostic and Statistical Manual of Mental Disorders, third edition (DSM-III), were determined using the Diagnostic Interview Schedule. RESULTS: Twenty percent of the patients studied had abused or been dependent on alcohol at some time in their lives, and 5 percent reported abuse or dependence within the last year. Of the techniques studied, a short screening questionnaire (Short Michigan Alcohol Screening Test [SMAST]) was the most accurate way of identifying patients who abused alcohol, and physician assessments were more accurate than laboratory tests. Although the physicians were aware of serious alcohol problems among 77 percent of their patients who met DSM-III criteria for alcohol abuse or dependence in the previous year, they identified only 36 percent of their patients with less serious problems or past alcohol abuse. They had discussed alcohol abuse with only 67 percent of the patients they identified as alcohol abusers. CONCLUSION: We conclude that a short screening questionnaire (SMAST) is an accurate means of identifying alcohol abuse. Despite physicians' recognition of serious alcohol problems, the problem is not addressed routinely, even among patients who are recognized as alcoholic.
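
The comparison among the SMAST, physician assessment, and laboratory tests reduces to standard diagnostic-accuracy arithmetic against the DSM-III reference diagnosis. A minimal sketch of that arithmetic follows; the 2x2 counts are hypothetical, not the study's data.

```python
# Standard 2x2 screening-accuracy measures against a reference standard
# (here, a DSM-III diagnosis). All counts are hypothetical placeholders.
def screening_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true abusers the screen flags
        "specificity": tn / (tn + fp),  # non-abusers the screen clears
        "ppv": tp / (tp + fp),          # flagged patients truly affected
        "npv": tn / (tn + fn),          # cleared patients truly unaffected
    }

# Hypothetical SMAST-style results on 242 outpatients:
print(screening_accuracy(tp=40, fp=18, fn=8, tn=176))
```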


Gastroenterology | 2008

Effect of Institution-Wide Policy of Colonoscopy Withdrawal Time ≥7 Minutes on Polyp Detection

Mandeep Sawhney; Marcelo S. Cury; Naama Neeman; Long Ngo; Janet M. Lewis; Ram Chuttani; Douglas K. Pleskow; Mark D. Aronson

BACKGROUND & AIMS: Practice guidelines recommend that endoscopists spend at least 7 minutes examining the colonic mucosa during colonoscopy withdrawal to optimize polyp yield. The aim of this study was to determine if the implementation of an institution-wide policy of colonoscopy withdrawal time ≥7 minutes was associated with an increase in colon polyp detection. METHODS: All 42 endoscopists at our institution were asked to attain a colonoscopy withdrawal time of at least 7 minutes. Compliance with the 7-minute withdrawal time was recorded for all nontherapeutic colonoscopies. The polyp detection ratio (number of polyps detected divided by number of colonoscopies performed) was computed. Regression models were used to assess the association between compliance with the 7-minute withdrawal time and polyp detection. RESULTS: During the study period, 23,910 colonoscopies were performed. The average age of patients was 56.8 years, and 54% were female. Colon cancer screening or surveillance was the indication for 42.5% of colonoscopies. At the beginning of the study, the polyp detection ratio was 0.48. Compliance with the 7-minute withdrawal time for nontherapeutic procedures increased from 65% at the beginning of the initiative to almost 100%. However, no increase in polyp detection ratio was noted over the same period for all polyps (slope, 0.0006; P = .45) or for polyps 1-5 mm (slope, 0.001; P = .26), 6-9 mm (slope, 0.002; P = .43), or ≥10 mm (slope, 0.006; P = .13). No association was detected when only colonoscopies performed for screening or surveillance were analyzed. CONCLUSIONS: An institution-wide policy of colonoscopy withdrawal time ≥7 minutes had no effect on colon polyp detection.
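
The paper's outcome measure and trend test are simple to reproduce in outline: compute the polyp detection ratio per period and regress it on time. The sketch below uses synthetic monthly counts (not the study's data); a slope near zero with a large P value corresponds to the reported null result.

```python
# Polyp detection ratio (polyps detected / colonoscopies performed) tracked
# over time, with a simple linear trend test. Synthetic data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                        # study month 0..23
colonoscopies = rng.integers(900, 1100, 24)   # procedures per month
polyps = rng.integers(430, 530, 24)           # polyps detected per month

detection_ratio = polyps / colonoscopies      # roughly 0.48, as in the paper

# OLS of the ratio on time; the slope estimates the monthly change.
fit = sm.OLS(detection_ratio, sm.add_constant(months)).fit()
print(f"slope = {fit.params[1]:+.4f}, P = {fit.pvalues[1]:.2f}")
```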


Arteriosclerosis, Thrombosis, and Vascular Biology | 1992

Plasma lipids and lipoproteins and the incidence of cardiovascular disease in the very elderly. The Bronx Aging Study.

Peter Zimetbaum; William H. Frishman; Wee Lock Ooi; Melanie P. Derman; Mark D. Aronson; Lewis I. Gidez; Howard A. Eder

The Bronx Aging Study is a 10-year prospective investigation of very elderly volunteers (mean age at study entry, 79 years; range, 75-85 years) designed to assess risk factors for dementia and coronary and cerebrovascular (stroke) diseases. Entry criteria included the absence of terminal illness and dementia. All subjects (n = 350) included in this report had at least two lipid and lipoprotein determinations. Overall, more than one third of subjects showed at least a 10% change in lipid and lipoprotein levels between the initial and final measurements. Moreover, mean levels for women were consistently different from those for men, and because of this finding subjects were classified into potential-risk categories based on the changes observed by using their sex-specific lipid and lipoprotein distributions. The incidences of cardiovascular disease, dementia, and death were compared between risk groups. Proportional-hazards analysis showed that in men a consistently low high density lipoprotein cholesterol level (≤30 mg/dL) was independently associated with the development of myocardial infarction (p = 0.006), cardiovascular disease (p = 0.002), or death (p = 0.002). For women, however, a consistently elevated low density lipoprotein cholesterol level (≥171 mg/dL) was associated with myocardial infarction (p = 0.032). Thus, low high density lipoprotein cholesterol remains a powerful predictor of coronary heart disease risk for men even into old age, while elevated low density lipoprotein cholesterol continues to play a role in the development of myocardial infarction in women. The findings suggest that an unfavorable lipoprotein profile increases the risk of cardiovascular morbidity and mortality even at advanced ages for both men and women.
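
The study's proportional-hazards analysis relates a baseline lipid category to time-to-event outcomes. Below is a minimal Cox-model sketch using the lifelines library; the data frame, column names, and values are hypothetical, and this is not the authors' analysis.

```python
# Cox proportional-hazards sketch: association between a persistently low
# HDL level and time to myocardial infarction. Toy data, not study data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years":   [2.0, 5.5, 9.1, 10.0, 4.2, 7.3, 10.0, 3.8],  # follow-up time
    "mi":      [1,   0,   1,   0,    1,   0,   0,    1],    # event observed?
    "low_hdl": [1,   0,   1,   1,    0,   0,   1,    1],    # HDL <= 30 mg/dL
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="mi")
cph.print_summary()  # hazard ratio for low_hdl with 95% CI and p-value
```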


Annals of Internal Medicine | 2004

Lipid Control in the Management of Type 2 Diabetes Mellitus: A Clinical Practice Guideline from the American College of Physicians

Vincenza Snow; Mark D. Aronson; E. Rodney Hornbake; Christel Mottur-Pilson; Kevin B. Weiss

Diabetes mellitus is a leading cause of morbidity and mortality in the United States. Type 2 diabetes mellitus is most common (90% to 95% of persons with diabetes) and affects older adults, particularly those older than 50 years of age. An estimated 16 million Americans have type 2 diabetes, and up to 800,000 new diagnoses are made each year (1, 2). Most adverse diabetes outcomes are a result of vascular complications, which are generally classified as microvascular (such as retinopathy, nephropathy, and neuropathy, although the latter may not be entirely a microvascular disease) or macrovascular (such as coronary artery disease, cerebrovascular disease, and peripheral vascular disease). To prevent or diminish the progression of microvascular and macrovascular complications, recommended diabetes management necessarily encompasses both metabolic control and control of cardiovascular risk factors (3-5). The need for good glycemic control is supported by the Diabetes Control and Complications Trial (6) in type 1 diabetes and, more recently, the United Kingdom Prospective Diabetes Study in type 2 diabetes (7). In these studies, tight blood sugar control reduced microvascular complications such as nephropathy and retinopathy but had little effect on macrovascular outcomes. Up to 80% of patients with type 2 diabetes will develop or die of macrovascular disease, underscoring the importance of preventing macrovascular complications. In an effort to provide internists and other primary care physicians with effective management strategies for diabetes care, the American College of Physicians (ACP) decided to develop guidelines on the management of dyslipidemia, particularly hypercholesterolemia, in people with type 2 diabetes. A previous College guideline addressed the critical role of tight blood pressure control in type 2 diabetes mellitus (8, 9). The target audience for this guideline is all clinicians who care for patients with type 2 diabetes. The target patient population is all persons with type 2 diabetes, including those who already have some form of microvascular complication and, of particular importance, premenopausal women. In this guideline we address the following questions. 1. What are the benefits of tight lipid control for both primary and secondary prevention in type 2 diabetes? 2. What is the evidence for treating to certain target levels of low-density lipoprotein (LDL) cholesterol for patients with type 2 diabetes? 3. Are certain lipid-lowering agents more effective or beneficial in patients with type 2 diabetes? This guideline is based on the systematic review of the evidence presented in the background paper by Vijan and colleagues in this issue (10). When Vijan and colleagues analyzed benefit or effectiveness, only studies that measured clinical end points were included. The major clinical end points in trials used to support the evidence for these guidelines were all-cause mortality, cardiovascular mortality, and cardiovascular events (that is, myocardial infarction, stroke, and cardiovascular mortality). No studies of lipid-lowering therapy have been conducted solely in patients with diabetes. Moreover, many trials excluded patients with diabetes. The sample sizes of participants with diabetes were often small, and many studies reported results only for the combined groups. Thus, the reports included in this review are of the subgroup analyses for studies that included patients with diabetes. The review was stratified into 2 categories.
The first category evaluated the effects of lipid management in primary prevention (that is, in patients without known coronary disease). The second category evaluated the effects in secondary prevention (that is, in patients with established coronary disease). A total of 12 lipid-lowering studies presented diabetes-specific data and reported clinical outcomes. A discussion of this evidence follows (for a more detailed description of methodology, refer to the background paper by Vijan and colleagues [10]).

Primary Prevention

Six studies of primary prevention in patients with diabetes were identified. The Air Force Coronary Atherosclerosis Prevention Study/Texas Coronary Atherosclerosis Prevention Study (AFCAPS/TexCAPS) randomly assigned patients with average cholesterol levels and lower than average high-density lipoprotein (HDL) cholesterol levels to lovastatin, 20 to 40 mg/d, or placebo (in addition to a low-fat and low-cholesterol diet) for an average follow-up of 5.2 years (11). Based on data from the Third National Health and Nutrition Examination Survey, mean total cholesterol level was 5.72 mmol/L (221 mg/dL), mean LDL cholesterol level was 3.88 mmol/L (150 mg/dL), and mean HDL cholesterol level was 0.93 mmol/L (36 mg/dL) for men and 1.03 mmol/L (40 mg/dL) for women. One hundred fifty-five patients had diabetes. Lovastatin therapy led to a relative risk of 0.56 (95% CI, 0.17 to 1.92) for any atherosclerotic cardiovascular event (first fatal or nonfatal myocardial infarction, unstable angina, or sudden cardiac death) and an absolute risk reduction of 0.04 (CI, −0.04 to 0.12), neither of which was statistically significant. The mean LDL cholesterol level at the end of the study was 2.97 mmol/L (115 mg/dL), and the mean HDL cholesterol level was 1.00 mmol/L (39 mg/dL). The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial-Lipid-Lowering Trial (ALLHAT-LLT) randomly assigned patients 55 years of age and older who had hypertension and at least one other coronary heart disease (CHD) risk factor to pravastatin, 40 mg/d, or placebo (12). In the subgroup analysis of 3638 patients with type 2 diabetes, the relative risk for CHD events was 0.89 (CI, 0.71 to 1.10); the absolute risk reduction was not reported. This study has been criticized because of the smaller difference between LDL cholesterol levels in the control and intervention groups, which is probably due in part to contamination of the control group by publication of several other lipid-lowering trials during the study. The Helsinki Heart Study (13) randomly assigned men age 40 to 55 years with elevated non-HDL cholesterol levels to gemfibrozil, 600 mg 2 times per day, or placebo. The mean total cholesterol level was 7.5 mmol/L (290 mg/dL), and mean HDL cholesterol level was 1.23 mmol/L (47.6 mg/dL). In the 135 patients with diabetes, the incidence of CHD at 5 years was 3.4% in the gemfibrozil group and 10.5% in the placebo group. The relative risk was 0.32 (CI, 0.07 to 1.46), and the absolute risk reduction was 0.07 (CI, −0.01 to 0.15). None of these differences were statistically significant (14). The Heart Protection Study (HPS) included data on both primary and secondary prevention in patients with diabetes who were at high risk for cardiovascular disease (15). The objective of this study was to examine the effects of therapy to lower LDL cholesterol level across a broad range of lipid levels and risk factors.
The HPS enrolled patients 40 to 80 years of age with nonfasting total cholesterol levels of at least 3.49 mmol/L (135 mg/dL). In the primary prevention group, 3982 patients had diabetes. Treatment with simvastatin, 40 mg, led to reduced risks for CHD events (relative risk, 0.74 [CI, 0.64 to 0.85]; absolute risk reduction, 0.05 [CI, 0.03 to 0.07]). The Prospective Study of Pravastatin in the Elderly at Risk (PROSPER) randomly assigned men and women 70 to 82 years of age with a history of cerebral or peripheral vascular disease or risk factors for such disease (such as smoking, hypertension, and diabetes) to pravastatin, 40 mg/d, or placebo (16). In the primary prevention group, 396 patients had diabetes. In these patients, treatment with pravastatin led to a trend toward harm (relative risk, 1.23 [CI, 0.77 to 1.95]; absolute risk reduction, −0.03 [CI, −0.10 to 0.04]). The interaction between diabetes and the treatment group was statistically significant, suggesting that patients with diabetes did substantially worse than those without diabetes. The Anglo-Scandinavian Cardiac Outcome Trial-Lipid Lowering Arm (ASCOT-LLA) randomly assigned patients age 40 to 79 years without CHD but with hypertension and at least 3 other cardiovascular risk factors (left ventricular hypertrophy, other electrocardiographic abnormalities, type 2 diabetes, peripheral arterial disease, previous stroke or transient ischemic attack, male sex, age ≥55 years, microalbuminuria, proteinuria, smoking, ratio of plasma total to HDL cholesterol of 6 or higher, or family history of premature CHD) to atorvastatin, 10 mg/d, or placebo (17). The diabetes subgroup, 2532 patients who had hypertension and at least 2 other risk factors, had low event rates of 3.6% in the control group and 3.0% in the intervention group. Thus, lipid-lowering treatment, with a relative risk of 0.84 (CI, 0.55 to 1.29) and an absolute risk reduction of 0.006 (CI, −0.008 to 0.019), did not lead to statistically significant improvements in the diabetes group.

Secondary Prevention

Eight trials reported on secondary prevention in patients with diabetes. The first, the Scandinavian Simvastatin Survival Study (4S), randomly assigned patients with coronary disease to simvastatin, 20 mg, or placebo (18). In a secondary analysis of the 202 patients with diabetes, simvastatin led to large benefits (relative risk for cardiovascular events, 0.50 [CI, 0.33 to 0.76]; absolute risk reduction, 0.23 [CI, 0.10 to 0.35]). Of note is the relatively high event rate in the control group (45%) compared with those seen in other trials. The Cholesterol and Recurrent Events (CARE) trial randomly assigned patients with previous myocardial infarction to pravastatin, 40 mg/d, or placebo (19). Pravastatin improved CHD outcomes in the 586 patients with diabetes (relative risk for cardiovascular events, 0.78 [CI, 0.62 to 0.99]; absolute risk reduction, 0.08 [CI, 0.01 to 0.16]). Results were reported as stratified by baseline LDL cholesterol levels and showed that for th
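
Each subgroup result above pairs a relative risk with an absolute risk reduction, both derived from the treatment- and control-arm event counts. The sketch below shows that arithmetic; the Katz log method for the RR confidence interval is one standard choice rather than what each trial necessarily used, and the example counts are made up.

```python
# Relative risk (RR) and absolute risk reduction (ARR) with 95% CIs from
# 2x2 trial counts. Katz log CI for the RR, Wald CI for the ARR.
import math

def rr_arr(ev_tx: int, n_tx: int, ev_ctl: int, n_ctl: int, z: float = 1.96):
    p_tx, p_ctl = ev_tx / n_tx, ev_ctl / n_ctl
    rr = p_tx / p_ctl
    se_log_rr = math.sqrt(1/ev_tx - 1/n_tx + 1/ev_ctl - 1/n_ctl)
    rr_ci = (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr))
    arr = p_ctl - p_tx                  # risk difference, control minus treated
    se_arr = math.sqrt(p_tx*(1 - p_tx)/n_tx + p_ctl*(1 - p_ctl)/n_ctl)
    arr_ci = (arr - z * se_arr, arr + z * se_arr)
    return rr, rr_ci, arr, arr_ci

# Illustrative (hypothetical) counts: 10/200 events on treatment vs 20/200 on control.
rr, rr_ci, arr, arr_ci = rr_arr(ev_tx=10, n_tx=200, ev_ctl=20, n_ctl=200)
print(f"RR {rr:.2f} (CI {rr_ci[0]:.2f} to {rr_ci[1]:.2f}), "
      f"ARR {arr:.3f} (CI {arr_ci[0]:.3f} to {arr_ci[1]:.3f})")
```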


Journal of General Internal Medicine | 2000

Confidential Clinician-reported Surveillance of Adverse Events Among Medical Inpatients

Saul N. Weingart; Amy N. Ship; Mark D. Aronson

BACKGROUND: Although iatrogenic injury poses a significant risk to hospitalized patients, detection of adverse events (AEs) is costly and difficult. METHODS: The authors developed a confidential reporting method for detecting AEs on a medicine unit of a teaching hospital. Adverse events were defined as patient injuries. Potential adverse events (PAEs) represented errors that could have, but did not, result in harm. Investigators interviewed house officers during morning rounds and by e-mail, asking them to identify obstacles to high quality care and iatrogenic injuries. They compared house officer reports with hospital incident reports and patients’ medical records. A multivariate regression model identified correlates of reporting. RESULTS: One hundred ten events occurred, affecting 84 patients. Queries by e-mail (incidence rate ratio [IRR] = 0.16; 95% confidence interval [95% CI], 0.05 to 0.49) and on days when house officers rotated to a new service (IRR = 0.12; 95% CI, 0.02 to 0.91) resulted in fewer reports. The most commonly reported process-of-care problems were inadequate evaluation of the patient (16.4%), failure to monitor or follow up (12.7%), and failure of the laboratory to perform a test (12.7%). Respondents identified 29 (26.4%) AEs, 52 (47.3%) PAEs, and 29 (26.4%) other house officer-identified quality problems. An AE occurred in 2.6% of admissions. The hospital incident reporting system detected only one house officer-reported event. Chart review corroborated 72.9% of events. CONCLUSIONS: House officers detect many AEs among inpatients. Confidential peer interviews of front-line providers are a promising method for identifying medical errors and substandard quality.


Annals of Internal Medicine | 2005

Screening for Hereditary Hemochromatosis: A Clinical Practice Guideline from the American College of Physicians

Amir Qaseem; Mark D. Aronson; Nick Fitterman; Vincenza Snow; Kevin B. Weiss; Douglas K. Owens

Recommendations

Recommendation 1: There is insufficient evidence to recommend for or against screening for hereditary hemochromatosis in the general population. There is currently insufficient evidence to determine whether the benefits of screening the general population outweigh the risks. The C282Y mutation is prevalent in certain populations, particularly white men, and treatment is neither costly nor associated with any significant harm. Although patients homozygous for C282Y are more likely to have an elevated serum ferritin level and transferrin saturation percentage, there is currently no way of predicting which patients will progress to overt disease. For clinicians who choose to screen, 1-time phenotypic screening of asymptomatic non-Hispanic white men with serum ferritin level and transferrin saturation would have the highest yield (1).

Recommendation 2: In case-finding for hereditary hemochromatosis, serum ferritin and transferrin saturation tests should be performed. There is no information available on risk stratification in patients with an associated condition or conditions such as type 2 diabetes, cardiac arrhythmias and cardiomyopathies, liver failure, hepatomegaly, cirrhosis, elevated liver enzyme levels, hepatocellular carcinoma, arthritis, hypogonadism, or changes in skin pigmentation. The initial symptoms associated with iron overload might be nonspecific, and the decision to perform tests should be based on clinical judgment regarding what may cause such protean manifestations. If testing is performed for these patients, cutoff values of serum ferritin level greater than 200 µg/L in women or greater than 300 µg/L in men and transferrin saturation greater than 55% may be used as criteria for case-finding; however, there is no general agreement about diagnostic criteria. Case-finding may also be considered for individuals with a family history of hereditary hemochromatosis, as their risk for developing the disease may be higher than that of the general population.

Recommendation 3: Physicians should discuss the risks, benefits, and limitations of genetic testing in patients with a positive family history of hereditary hemochromatosis or those with elevated serum ferritin level or transferrin saturation. Before genetic testing, individuals should be made aware of the benefits and risks of genetic testing. This should include discussing available treatment and its efficacy; costs involved (2); and social issues, such as the impact of disease labeling, insurability and psychological well-being, and the possibility of as-yet-unknown genotypes associated with hereditary hemochromatosis.

Recommendation 4: Further research is needed to establish better diagnostic, therapeutic, and prognostic criteria for hereditary hemochromatosis. The lack of information on the natural history of the disease makes it difficult to manage patients with hereditary hemochromatosis. There are no clearly defined criteria to risk-stratify patients into groups more or less likely to develop overt disease. Future developments in technology and genetic screening might help in the diagnosis and management of hereditary hemochromatosis. In addition, there is a need for more uniform diagnostic criteria.

Introduction

Hereditary hemochromatosis is a genetic disorder of iron metabolism and is characterized by tissue injury resulting from an abnormal accumulation of iron in various organs.
This disease is usually a consequence of an increased absorption of iron from the gastrointestinal tract, which results in increased iron deposition in tissue, particularly in the liver, heart, and pancreas. If left untreated, it can lead to organ damage, such as cirrhosis, as well as hepatocellular cancer. However, early diagnosis of hereditary hemochromatosis is difficult because of variability in the case definition and diagnostic standard used. Diagnosis of hereditary hemochromatosis is usually based on a combination of various genetic or phenotypic criteria. Genetically, it can be based on direct DNA testing for the 2 HFE gene mutations (C282Y and H63D) associated with hereditary hemochromatosis. The C282Y mutation in the HFE gene on chromosome 6 is present in almost 90% of those affected. Most patients are homozygous, and mutation transmission is autosomal recessive. The H63D mutation may be associated with hereditary hemochromatosis, but the actual clinical effects of this mutation are uncertain (3). A small proportion of compound heterozygotes (C282Y/H63D) can also develop iron overload. Phenotypic markers of hereditary hemochromatosis may be used to identify the disease. Percentage of transferrin saturation and serum ferritin level have been used to confirm the diagnosis of hereditary hemochromatosis. Transferrin saturation determines how much iron is bound to the protein that carries iron in the blood. Serum ferritin level is elevated in patients with hereditary hemochromatosis and correlates with liver iron and development of cirrhosis. Liver biopsy to measure hepatic iron concentration by staining is considered the gold standard to test for hereditary hemochromatosis. However, with the advent of genetic testing, liver biopsy is not widely used to confirm the diagnosis. There is a consensus on the various diagnostic tests that could be used to diagnose hereditary hemochromatosis. However, the threshold levels that should be used to define the disease remain controversial. On the basis of the review of the background paper by Schmitt and colleagues (4), also in this issue, and considering that lower cutoffs are more sensitive and less specific, a serum ferritin level greater than 200 µg/L and transferrin saturation greater than 55% suggest an increased risk for hereditary hemochromatosis and the need for further investigation (5). Hereditary hemochromatosis is the most common recessive genetic trait in white persons. However, estimating the prevalence of this disease is difficult. Genetic testing of populations originating in northern Europe showed that approximately 0.5% are homozygous for the C282Y mutation (6). The Hemochromatosis and Iron Overload Screening (HEIRS) Study showed that the prevalence of C282Y homozygotes was highest among non-Hispanic white persons (0.44% [95% CI, 0.42% to 0.47%]) (1). Phenotypic screening of the population in the United States demonstrated that 1% to 6% have elevated transferrin saturation and 11% to 22% of this group have an increased serum ferritin level (7). Hereditary hemochromatosis has been estimated to be present in 3 to 5 people per 1000 in the general population (8). Decisions regarding screening are difficult because of the variable penetrance of mutations of the HFE gene and the absence of any definitive trials addressing the benefits and risks of therapeutic phlebotomy in asymptomatic patients or those with only laboratory abnormalities.
The purpose of this guideline is to increase physician awareness of hereditary hemochromatosis, particularly the variable penetrance of genetic mutations; aid in case-finding; and explain the role of genetic testing. The target audience for this guideline is internists and other primary care physicians. The target patient population is all persons who have a probability or susceptibility of developing hereditary hemochromatosis, including the relatives of individuals who already have the disease. This guideline is based on the systematic review of the evidence in the background paper (4). This guideline attempts to answer the following questions: 1) What is the prevalence of hereditary hemochromatosis in the primary care setting? 2) In asymptomatic patients with hereditary hemochromatosis, what is the risk for end-organ damage or death? 3) How diagnostically useful are transferrin saturation and serum ferritin in identifying patients with hereditary hemochromatosis in the primary care setting? 4) Is phlebotomy efficacious in reducing morbidity or fatal complications in asymptomatic patients with hereditary hemochromatosis? 5) Do the benefits of screening primary care patients for hereditary hemochromatosis outweigh the risks?

Prevalence

Estimates of the prevalence of hereditary hemochromatosis in the general population vary widely because no set criteria define what constitutes hereditary hemochromatosis (5, 9, 10). Some argue that genotyping should be used as the gold standard and that the sensitivity and specificity of phenotyping should be calculated and compared with those of genotyping. Others support the use of persistently elevated serum ferritin level and percentage of transferrin saturation as the case definition of hereditary hemochromatosis. Studies of differing populations, using the strict criteria recommended in the HEIRS Study (11), have estimated that the prevalence of hereditary hemochromatosis ranges from 1 in 357 persons to 1 in 625 persons in the general population, with rates as high as 1 in 135 persons among Norwegian men (4). The Table lists various studies showing the prevalence of hereditary hemochromatosis in primary care settings.

Table. Prevalence of Hereditary Hemochromatosis in Primary Care Settings

Risk for Complications in Asymptomatic Patients

Asymptomatic individuals are patients in the latent phase of hereditary hemochromatosis who were incidentally identified. These persons have not yet shown any signs or symptoms related to the disease. Although the clinical manifestations associated with hereditary hemochromatosis are influenced by age, sex, diet, and other unknown factors, understanding the path of disease progression is important for treating the disease. Clinical outcomes that can be associated with hereditary hemochromatosis are cirrhosis, hepatocellular carcinoma, type 2 diabetes, congestive heart failure, arthritis, hypogonadism in males, and even death. However, most persons with the mutated gene remain asymptomatic. The literature that discusses the relationship between biochemical primary iron overload (
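
Recommendation 2's case-finding thresholds can be read as a simple predicate, sketched below. This is an illustration only: the guideline stresses that there is no general agreement on diagnostic cutoffs, and whether both abnormalities must co-occur is ambiguous in the text, so requiring both (the "and" below) is this sketch's assumption.

```python
# Case-finding triggers from Recommendation 2: ferritin > 200 ug/L (women)
# or > 300 ug/L (men), and transferrin saturation > 55%. A sketch, not a
# diagnostic rule; requiring BOTH abnormalities is an assumption here.
def meets_case_finding_cutoffs(ferritin_ug_l: float,
                               transferrin_sat_pct: float,
                               male: bool) -> bool:
    ferritin_cutoff = 300 if male else 200   # ug/L
    return ferritin_ug_l > ferritin_cutoff and transferrin_sat_pct > 55

print(meets_case_finding_cutoffs(350, 60, male=True))    # True: consider workup
print(meets_case_finding_cutoffs(180, 40, male=False))   # False
```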


Journal of General Internal Medicine | 2001

A Physician-based Voluntary Reporting System for Adverse Events and Medical Errors

Saul N. Weingart; Lawrence D. Callanan; Amy N. Ship; Mark D. Aronson

OBJECTIVE: To create a voluntary reporting method for identifying adverse events (AEs) and potential adverse events (PAEs) among medical inpatients. DESIGN: Medical house officers asked their peers about obstacles to care, injuries or extended hospitalizations, and problems with medications that affected their patients. Two independent reviewers coded event narratives for adverse outcomes, responsible parties, preventability, and process problems. We corroborated house officers’ reports with hospital incident reports and conducted a retrospective chart review. SETTING: The cardiac step-down, oncology, and medical intensive care units of an urban teaching hospital. INTERVENTION: Structured confidential interviews of interns by postgraduate year-2 and -3 medical residents during work rounds. MEASUREMENTS AND MAIN RESULTS: Respondents reported 88 events over 3 months. AEs occurred among 5 patients (0.5% of admissions) and PAEs among 48 patients (4.9% of admissions). Delayed diagnoses and treatments figured prominently among PAEs (54%). Clinicians were responsible for the greatest number of incidents (55%), followed by workers in radiology (15%), the laboratory (11%), and the pharmacy (3%). Respondents identified a variety of problematic processes of care, including problems with diagnosis (16%), therapy (26%), and failure to provide clinical and support services (29%). We corroborated 84% of reported events in the medical record. Participants found voluntary peer reporting of medical errors unobtrusive and agreed that it could be implemented on a regular basis. CONCLUSIONS: A physician-based voluntary reporting system for medical errors is feasible and acceptable to front-line clinicians.


Journal of General Internal Medicine | 2004

Creating a Quality Improvement Elective for Medical House Officers

Saul N. Weingart; Anjala V. Tess; Jeffrey Driver; Mark D. Aronson; Kenneth Sands

The Accreditation Council on Graduate Medical Education (ACGME) requires that house officers demonstrate competencies in “practice-based learning and improvement” and in “the ability to effectively call on system resources to provide care that is of optimum value.” Anticipating this requirement, faculty at a Boston teaching hospital developed a 3-week elective in quality improvement (QI) for medical house officers. The objectives of the elective were to enhance residents’ understanding of QI concepts and their familiarity with the hospital’s QI infrastructure, and to give them practical experience with root-cause analysis and QI initiatives. Learners participated in three didactic seminars, joined hospital-based QI activities, conducted a root-cause analysis, and completed a QI project under the guidance of a faculty mentor. The elective enrolled 26 residents over 3 years. Sixty-three percent of resident respondents said that the elective increased their understanding of QI in health care; 88% better understood QI in their own institution.


Annals of Internal Medicine | 2003

Diagnosis and management of adults with pharyngitis. A cost-effectiveness analysis.

Joan M. Neuner; Mary Beth Hamel; Russell S. Phillips; Kira Bona; Mark D. Aronson

Pharyngitis is a common and costly condition in adults. The National Ambulatory Medical Care Survey estimated that 18 million patients sought care for a sore throat in the United States in 1996, making it the sixth leading cause of visits to physicians (1). As many as four to six times more individuals may not seek care for a sore throat (2, 3). Many organisms cause sore throat. Chief among them are group A β-hemolytic streptococcus (GAS), non-group A streptococcus, Mycoplasma pneumoniae, Chlamydia pneumoniae, and several respiratory viruses (4). With rare exceptions, such as with Neisseria gonorrhoeae infection or the acute antiretroviral syndrome, no compelling data support treatment for patients with pharyngitis not caused by group A streptococcus (5). Nevertheless, although only about 10% of adults with pharyngitis seen in primary care settings have group A streptococcal infection (6), 75% of patients seen by physicians receive antibiotics (7). The potential morbidity of both allergic reactions and antibiotic resistance must be considered in decisions about management of pharyngitis (8, 9). Thus far, GAS has remained sensitive to penicillin, which therefore remains the recommended treatment (10). However, despite expert recommendations, physicians prescribe broad-spectrum antibiotics to 70% to 75% of adults (7, 11). Widespread resistance to macrolides has already been documented in GAS (12-14). Evidence for the effectiveness of GAS treatment has also become less compelling in recent years. Acute rheumatic fever, a sequela of GAS pharyngitis, has become exceedingly rare in adults in industrial societies outside of sporadic outbreaks (15-17); as a result, prevention of that illness is not an important rationale for treatment. Little evidence suggests that treatment prevents glomerulonephritis (18-20). Pharyngitis treatment does shorten symptom duration and reduce the risk for infectious sequelae (21, 22), but the clinical significance of these benefits continues to be debated (22). Clinicians have several tools to determine whether a patient with pharyngitis is likely to have GAS. Rapid diagnostic assays with excellent operating characteristics are available (23-33). Furthermore, clinical criteria or decision rules can help clinicians predict the likelihood of a positive throat culture (6, 34); a recent systematic review and clinical guideline (35, 36) recommended several strategies for diagnosis and management of pharyngitis based on one such decision rule (34). Cost-effectiveness and decision analyses incorporating medical costs are useful in assessing management strategies when no definitive randomized clinical trials have compared these strategies (37). We performed a cost-utility analysis to examine five common strategies for testing and treatment in pharyngitis care. We also examined the effect of a decision rule (34) on those strategies.

Methods

Decision Analytic Model

We developed a decision model (Appendix Figure 1) to evaluate common strategies for managing adult patients with pharyngitis.
We constructed this model to examine the short-term cost-effectiveness of five strategies: 1) observation only, neither test nor treat [observation]; 2) empirical antibiotic treatment of all patients without any testing [empirical therapy]; 3) throat culture for all patients, with antibiotic treatment for positive results [culture]; 4) optical immunoassay (OIA) followed by culture to confirm a negative OIA test result only, with antibiotic treatment for positive results on either test [OIA/culture]; and 5) OIA alone for all patients, with antibiotic treatment for positive results [OIA alone]. Our model examines several possible outcomes of pharyngitis, and we discuss the probabilities of each in the following section. In brief, we examined the effect of the preceding strategies for diagnosis or treatment with a 10-day course of penicillin (with erythromycin substituted in case of an allergic reaction to penicillin [10, 35, 36, 38, 39]) on each of four outcomes: acute rheumatic fever, peritonsillar abscess, duration of symptoms, and allergic reactions to antibiotics. All outcomes were appropriately treated, and the costs and effects of treatment were included in our model. We made several simplifying assumptions in creating our decision model. We considered only patients without a history of acute rheumatic fever or glomerulonephritis. Because a patient with a history of penicillin allergy would not receive penicillin and therefore would have no risk for allergic reaction, such patients were not included in our base-case model. We assumed that no patient would develop acute rheumatic fever along with another complication (abscess or allergic reaction) and that patient adherence and follow-up (including the ability to contact patients with culture results) were 100%. Finally, we assumed that all tests were done in an on-site reference laboratory; we did not consider the cost of transporting specimens for either culture or OIA, and we assumed that OIA results would be available before the patient left the office. In accordance with recent recommendations by an expert panel (40), the base-case analysis takes the societal perspective. We considered all outcomes and direct costs incurred within the first year of diagnosis except (as recommended for base-case analyses using quality-adjusted life-years [QALYs]) for costs such as work lost because of short-term illness (40, 41). These losses are assumed to be included in the decreased preference for illness, estimated as part of the utility for short-term illness. Three studies of adult pharyngitis that examined work days lost (42-44) did not find a significant difference in lost work days between patients treated and those not treated with penicillin; therefore, inclusion of lost productivity costs would probably not have affected our results appreciably. We limited our analysis to the first year after diagnosis. Most of the costs associated with GAS pharyngitis occur within the first several weeks. A few patients will have late complications, such as rheumatic valve deformities, and will require treatments such as heart valve replacement 20 or more years after their episode of pharyngitis. Because these complications are rare and because discounting would eliminate most of these downstream costs, we joined pediatric investigators in limiting our analysis to health care costs incurred in the first year (45, 46). The model output was quality-adjusted loss of life expectancy, measured as quality-adjusted life-days.
Incremental cost-effectiveness analyses were performed by rank ordering all five competing strategies by increasing effectiveness, then calculating incremental cost-effectiveness ratios for each strategy (Appendix). All analyses were performed by using a decision analysis software program (DATA, versions 3.5 and 4.0, TreeAge, Williamstown, Massachusetts).

Data Sources

We searched the published literature for probabilities, utilities, and costs, as described in the following section (and in more detail in the Appendix).

The Clinical Examination

We examined the incorporation of the clinical examination into our strategies for management of pharyngitis. A recent systematic review of the clinical examination in adult pharyngitis (47) found that no individual element of the history or physical examination for a patient with pharyngitis is accurate enough to diagnose streptococcal pharyngitis (Appendix). However, several clinical prediction rules have combined key findings as a tool in predicting the probability of sore throat in adults (6, 34, 48-50). The pharyngitis decision rule by Centor and colleagues (34) (Appendix Figure 2) is the only rule validated in several populations (47, 51-53). It is based on four clinical findings (tonsillar exudates, tender anterior cervical lymphadenopathy, absence of cough, and history of fever); each risk factor is weighted equally to give a score of 0 to 4 points. The score can then be used as a likelihood ratio by applying it to a population with a known GAS pharyngitis prevalence (such as the patients seen in a practice) to determine the individual patient's probability of GAS pharyngitis. Because this new probability estimate can be considered a prevalence of GAS pharyngitis for an individual patient, we examined the incorporation of the decision rule into our strategies (as described in the Results section under the heading Application of a Clinical Decision Rule).

Prevalence of GAS Pharyngitis

The prevalence of GAS pharyngitis in adults, defined as the proportion of throat cultures that grow GAS, varies between 5% and 26% in primary care and emergency department settings (6, 34, 54). It can also vary with the season of the year, exposure to children, and other factors (10). On the basis of a study done in Boston, Massachusetts, we used a GAS pharyngitis prevalence of 10% (6) for our model (Table 1).

Table 1. Baseline Probabilities, Utilities, and Costs for Cost-Effectiveness Analysis of Management of Group A β-Hemolytic Streptococcal Pharyngitis

GAS Test Characteristics

We modeled two-plate culture in the reference laboratory as the gold standard with 100% sensitivity and specificity. Although this is not an ideal gold standard, other possibilities (such as antibody titers) cannot be obtained when a treatment decision must be made. Culture is therefore generally considered the criterion standard (10, 46, 55, 56). We identified studies of rapid antigen testing in September 2000 using the MEDLINE subject heading terms "pharyngitis" and "streptococcal infections, diagnosis" and found that most studies of OIA were of relatively good quality and used similar gold standards. Therefore, we averaged the sensitivity findings of the studies of OIA, weighting by the number of patients in each study, to estimate an overall sensitivity of 0.884 (23-33) and specificity of 0.944 (23-33). We incorporated these test characteristics into our model by using Bayesian a
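
The model folds the pooled OIA operating characteristics (sensitivity 0.884, specificity 0.944) and the 10% GAS prevalence into post-test probabilities via Bayes' rule; the Centor score plays the same role by shifting the pretest probability. Below is a minimal sketch of that updating step only, not the authors' full decision tree.

```python
# Bayes' rule for a dichotomous rapid-test result, using the paper's pooled
# OIA sensitivity/specificity and its 10% GAS prevalence.
def post_test_probability(pretest: float, sens: float, spec: float,
                          positive: bool) -> float:
    lr = sens / (1 - spec) if positive else (1 - sens) / spec  # likelihood ratio
    odds = pretest / (1 - pretest) * lr    # pretest odds times LR
    return odds / (1 + odds)

SENS, SPEC, PREV = 0.884, 0.944, 0.10
print(post_test_probability(PREV, SENS, SPEC, positive=True))   # ~0.64
print(post_test_probability(PREV, SENS, SPEC, positive=False))  # ~0.013
```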

Collaboration


Dive into Mark D. Aronson's collaborations.

Top Co-Authors

Naama Neeman
University of California

Roger B. Davis
Beth Israel Deaconess Medical Center

Daniel A. Leffler
Beth Israel Deaconess Medical Center

Vincenza Snow
American College of Physicians

Alexander R. Carbo
Beth Israel Deaconess Medical Center

Amir Qaseem
American College of Physicians

Anthony L. Komaroff
Brigham and Women's Hospital