
Publication


Featured research published by Michael J. Kallan.


Critical Care Medicine | 2013

Benchmarking the incidence and mortality of severe sepsis in the United States.

David F. Gaieski; J. Matthew Edwards; Michael J. Kallan; Brendan G. Carr

Background: In 1992, the first consensus definition of severe sepsis was published. Subsequent epidemiologic estimates were collected using administrative data, but ongoing discrepancies in the definition of severe sepsis produced large differences in estimates. Objectives: We seek to describe the variations in incidence and mortality of severe sepsis in the United States using four methods of database abstraction. We hypothesized that different methodologies of capturing cases of severe sepsis would result in disparate estimates of incidence and mortality. Design, Setting, Participants: Using a nationally representative sample, four previously published methods (Angus et al, Martin et al, Dombrovskiy et al, and Wang et al) were used to gather cases of severe sepsis over a 6-year period (2004-2009). In addition, the use of new International Statistical Classification of Diseases, 9th Edition (ICD-9), sepsis codes was compared with previous methods. Measurements: Annual national incidence and in-hospital mortality of severe sepsis. Results: The average annual incidence varied by as much as 3.5-fold depending on the method used and ranged from 894,013 (300/100,000 population) to 3,110,630 (1,031/100,000) using the methods of Dombrovskiy et al and Wang et al, respectively. The average annual increase in the incidence of severe sepsis was similar (13.0% to 13.3%) across all methods. In-hospital mortality ranged from 14.7% to 29.9% using the abstraction methods of Wang et al and Dombrovskiy et al, respectively. Using all methods, there was a decrease in in-hospital mortality across the 6-year period (35.2% to 25.6% [Dombrovskiy et al] and 17.8% to 12.1% [Wang et al]). Use of ICD-9 sepsis codes more than doubled over the 6-year period (158,722 to 489,632 [995.92, severe sepsis]; 131,719 to 303,615 [785.52, septic shock]). Conclusion: There is substantial variability in the incidence and mortality of severe sepsis depending on the method of database abstraction used. A uniform, consistent method is needed for use in national registries to facilitate accurate assessment of clinical interventions and outcome comparisons between hospitals and regions.
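
The per-100,000 figures above follow directly from the annual case counts. A minimal sketch in Python, assuming a rough mid-period U.S. population of about 298 million (the paper uses year-specific census denominators, so expect small deviations from the published rates):

```python
# Minimal sketch of the per-100,000 incidence arithmetic. The population
# denominator is an assumption (roughly the mid-period U.S. population);
# the paper uses year-specific denominators, so results differ slightly.
US_POPULATION = 298_000_000  # assumed, not from the paper

def incidence_per_100k(annual_cases: int, population: int) -> float:
    """Annual case count expressed per 100,000 population."""
    return annual_cases / population * 100_000

for method, cases in [("Dombrovskiy et al", 894_013),
                      ("Wang et al", 3_110_630)]:
    print(f"{method}: {incidence_per_100k(cases, US_POPULATION):.0f}/100,000")
```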


Biological Psychiatry | 2004

Lack of efficacy of estradiol for depression in postmenopausal women: a randomized, controlled trial

Mary F. Morrison; Michael J. Kallan; Thomas R. Ten Have; Ira R. Katz; Kathryn Tweedy; Michelle Battistini

BACKGROUND Estrogen has been considered as a potential antidepressant in postmenopausal women. Our goal was to study whether estrogen therapy is effective in treating depressive disorders in older postmenopausal women and to determine whether progestins are associated with a deterioration of mood. METHODS After 2 weeks of single-blind placebo treatment in 87 patients, 57 were randomly assigned to receive 8 weeks of treatment with estradiol (0.1 mg/day; n = 31) or placebo (n = 26). All patients were then treated with medroxyprogesterone 10 mg/day for 2 weeks combined with the study patch. Depressive symptoms were rated with the 21-item Hamilton Depression and Center for Epidemiologic Studies Depression scales. RESULTS A clinically significant antidepressant effect of estradiol was excluded after 8 weeks of estradiol treatment. The estradiol group and placebo group improved in depressive symptoms at a similar rate based on the Hamilton Depression Scale (40% decrease in depression for estradiol vs. 44% for placebo). No significant increase in depressive symptoms was demonstrated with the use of progestins; however, positive affect decreased slightly with the use of combined estradiol-medroxyprogesterone compared with medroxyprogesterone alone (5.8%, p = .027). CONCLUSIONS Estradiol cannot be considered an effective treatment in postmenopausal women with mild to moderate depression.


Annals of Internal Medicine | 2014

Hepatic Decompensation in Antiretroviral-Treated Patients Co-Infected With HIV and Hepatitis C Virus Compared With Hepatitis C Virus–Monoinfected Patients: A Cohort Study

Vincent Lo Re; Michael J. Kallan; Janet P. Tate; A. Russell Localio; Joseph K. Lim; Matthew Bidwell Goetz; Marina B. Klein; David Rimland; Maria C. Rodriguez-Barradas; Adeel A. Butt; Cynthia L. Gibert; Sheldon T. Brown; Lesley S. Park; Robert Dubrow; K. Rajender Reddy; Jay R. Kostman; Brian L. Strom; Amy C. Justice

Context: Patients with HIV are often co-infected with hepatitis C virus (HCV). Whether treatment of HIV with antiretroviral therapy (ART) can improve HCV outcomes is a topic of interest. Contribution: In a Veterans Affairs study, patients co-infected with HIV and HCV who had HIV RNA levels less than 1,000 copies/mL had a lower rate of hepatic decompensation than those with less HIV suppression. However, the rate was still higher than that in HCV-monoinfected patients. Caution: Few women were studied. Implication: Patients co-infected with HIV and HCV remain at greater risk for poor outcomes from HCV infection than HCV-monoinfected patients despite viral suppression by ART. (The Editors)

Co-infection with chronic hepatitis C virus (HCV) occurs in 10% to 30% of HIV-infected patients (1-4). The course of chronic HCV is accelerated in patients co-infected with HIV, with more rapid progression of liver fibrosis than in HCV-monoinfected patients (5-7). Consequently, HCV-related liver complications, particularly hepatic decompensation (defined by the presence of ascites, spontaneous bacterial peritonitis, variceal hemorrhage, or hepatic encephalopathy [8]), have emerged as important causes of illness in co-infected patients (9, 10). Despite the importance of HCV-related end-stage liver disease, few longitudinal studies have evaluated the incidence and determinants of hepatic decompensation among patients co-infected with HIV and HCV during the antiretroviral therapy (ART) era. Previous studies suggest that ART slows progression of HCV-associated liver fibrosis, possibly by reducing HIV-related inflammation and immune dysfunction and inhibiting the ability of HIV to directly infect hepatocytes (10-13). However, whether rates of hepatic decompensation and other severe liver events (for example, hepatocellular carcinoma [HCC] or liver-related death) in co-infected patients receiving ART are similar to those in HCV-monoinfected patients remains unclear. Furthermore, the determinants of hepatic decompensation among co-infected patients receiving ART are unknown. Determination of these factors could help define the mechanisms of decompensation in co-infected patients and could suggest interventions to reduce the risk for end-stage liver disease in this population. We first compared the incidence of hepatic decompensation between antiretroviral-treated patients co-infected with HIV and HCV and HCV-monoinfected patients. We hypothesized that rates of decompensation would remain higher in co-infected patients despite ART. We then evaluated host and viral factors associated with decompensation among co-infected patients.

Methods

Study Design and Data Source

We conducted a retrospective cohort study among antiretroviral-treated patients co-infected with HIV and HCV and HCV-monoinfected patients in the VACS-VC (Veterans Aging Cohort Study Virtual Cohort) between 1 January 1997 and 30 September 2010 (14). The VACS-VC consists of electronic medical record data from HIV-infected patients receiving care at Veterans Affairs (VA) medical facilities across the United States. Each HIV-infected patient is matched on age, sex, race/ethnicity, and site to 2 HIV-uninfected persons. Data include hospital and outpatient diagnoses (recorded using International Classification of Diseases, Ninth Revision [ICD-9], codes), procedures (recorded using CPT [Current Procedural Terminology] codes), laboratory results, and pharmacy data. Clinically confirmed cancer diagnoses are available from the VA Central Cancer Registry.
Deaths are identified from the VA Vital Status file, which uses data from the Social Security Death Master File, Medicare Vital Status Files, and the VA Beneficiary Identification and Records Locator Subsystem. For patients who died, the principal cause of death can be determined by linkage with the National Death Index (15). In addition, U.S. Medicare and Medicaid claims data are available for veterans also enrolled in these programs and have been merged with VACS-VC data.

Study Patients

Co-infected patients were included if they had detectable HCV RNA, had recently initiated ART (defined as use of ≥3 antiretrovirals from ≥2 classes [16] or 3 nucleoside analogues [a previously accepted ART regimen] [17]) within the VA system, had an HIV RNA level greater than 500 copies/mL within 180 days before starting ART (to identify those who newly initiated ART [18]), and had been observed for at least 12 months in the VACS-VC after starting ART. Monoinfected patients had detectable HCV RNA, no recorded HIV ICD-9 diagnosis or antiretroviral prescriptions, and at least 12 months of observation in the VACS-VC. Patients were excluded if, during the baseline period (defined in the Statistical Analysis section), they had hepatic decompensation, HCC, or liver transplantation or received interferon-based HCV therapy (because treatment reduces the risk for hepatic decompensation [19, 20]).

Study Outcomes

The primary outcome was incident hepatic decompensation, defined by ≥1 ICD-9 diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or ≥2 such outpatient diagnoses in the VACS-VC (Supplement 1). A prior study validated this determination, with 91% of events confirmed by medical records (21). The requirement of ≥2 outpatient diagnoses aimed to exclude events that were suspected but not subsequently confirmed at follow-up visits. On the basis of the results of the prior validation study (21), we did not include ICD-9 diagnoses for hepatic encephalopathy and jaundice, which could indicate decompensation, because these diagnoses frequently were linked to unrelated conditions (for example, narcotic overuse, stroke recorded as encephalopathy, or biliary obstruction or atazanavir-associated hyperbilirubinemia recorded as jaundice). Date of decompensation was defined as the hospital discharge date (if identified by hospital diagnosis) or the initial outpatient diagnosis date (if identified by outpatient diagnosis).

Supplement 1. ICD-9, ICD-10, and CPT Codes

Secondary outcomes included incident hepatic decompensation (determined by the aforementioned ICD-9-based definition) within the VACS-VC, Medicare, or Medicaid data (to capture outcomes occurring at non-VA hospitals that did not result in transfer to a VA facility; this outcome was secondary because non-VA events have not been validated); HCC; and severe liver events, a composite outcome of hepatic decompensation within the VACS-VC, HCC, or liver-related death. Hepatocellular carcinoma was determined using the VA Central Cancer Registry, which confirmed diagnoses by histologic or cytologic evaluation or consistent radiography. We classified deaths as liver-related if the underlying cause from the National Death Index was recorded as hepatic decompensation, liver cancer, alcoholic liver disease, viral hepatitis, or nonalcoholic liver disease (Supplement 1) (15).
Data Collection

Baseline data (Table 1) included age, sex, race/ethnicity, VA center patient volume, body mass index (BMI), diabetes mellitus, alcohol dependence or abuse, injection or noninjection drug use, hepatitis B surface antigen status, HCV genotype, HCV RNA level, pre-ART CD4 cell count, pre-ART plasma HIV RNA level, and baseline antiretroviral regimen. Diabetes was defined as a random glucose level of at least 200 mg/dL or antidiabetic medication use (22, 23). Alcohol dependence or abuse (24) and injection or noninjection drug use (24, 25) were defined by previously validated ICD-9 diagnoses (Supplement 1). Baseline serum creatinine, hemoglobin, alanine aminotransferase (ALT), and aspartate aminotransferase (AST) levels and platelet count were collected from dates closest to but before the start of follow-up. The baseline FIB-4 score, a noninvasive measure of advanced hepatic fibrosis, was determined as follows: FIB-4 = (age in years × AST in U/L) / [(platelet count in 10^9/L) × √(ALT in U/L)] (26). Because liver fibrosis can progress by ≥1 stage as early as within 4 years for antiretroviral-treated patients co-infected with HIV and HCV (7) and within 5 years for HCV-monoinfected persons (27), we determined baseline FIB-4 scores by using ALT levels, AST levels, and platelet counts within a 2-year period around the start of follow-up. Scores less than 1.45 indicate no or minimal fibrosis, and scores greater than 3.25 indicate advanced hepatic fibrosis or cirrhosis in co-infected (26) and HCV-monoinfected patients (28).

Table 1. Characteristics of the Study Cohorts

Longitudinal data included hepatitis B surface antigen status, plasma HIV RNA level, diabetes, and liver transplantation (determined by diagnosis and procedural codes) (Supplement 1).

Statistical Analysis

The 12 months before the start of follow-up represented the baseline period for both cohorts. Follow-up began 12 months after ART initiation for co-infected patients and after 12 months in the VACS-VC for monoinfected patients. The rationale for defining the baseline period as the first year of receipt of ART for co-infected patients was that many of these patients initially entered care at the time of ART initiation, which was shortly after their HIV diagnosis. Follow-up continued until a study end point, death, initiation of HCV therapy, or the last visit before 30 September 2010, whichever came first. For descriptive purposes, we estimated incidence rates (events per 1,000 person-years) of end points with 95% CIs, standardized by the age and race/ethnicity distribution of co-infected patients (29). We then used Cox models to estimate adjusted hazard ratios (HRs) for outcomes in co-infected compared with monoinfected patients (30). We controlled for all available clinically relevant variables in Table 1. The proportionality of hazards was evaluated by plots of Schoenfeld residuals (31). In a sensitivity analysis, we addressed the potential for informative censoring by using inverse probability of censoring weights and Cox regression (Supplement
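
The FIB-4 calculation defined under Data Collection is mechanical enough to sketch directly. A minimal Python version, using the cutoffs cited in the article; the patient values in the example are hypothetical:

```python
import math

def fib4(age_years: float, ast: float, alt: float, platelets: float) -> float:
    """FIB-4 = (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

def interpret(score: float) -> str:
    # Cutoffs cited in the article (references 26 and 28)
    if score < 1.45:
        return "no or minimal fibrosis"
    if score > 3.25:
        return "advanced fibrosis or cirrhosis"
    return "indeterminate"

# Hypothetical patient values, for illustration only
score = fib4(age_years=52, ast=88, alt=60, platelets=110)
print(f"FIB-4 = {score:.2f} ({interpret(score)})")
```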


Pediatrics | 2009

Effectiveness of Belt Positioning Booster Seats: An Updated Assessment

Kristy B. Arbogast; Jessica Steps Jermakian; Michael J. Kallan; Dennis R. Durbin

OBJECTIVE: The objective of this study was to provide an updated estimate of the effectiveness of belt-positioning booster (BPB) seats compared with seat belts alone in reducing the risk for injury for children aged 4 to 8 years. METHODS: Data were collected from a longitudinal study of children who were involved in crashes in 16 states and the District of Columbia from December 1, 1998, to November 30, 2007, with data collected via insurance claims records and a validated telephone survey. The study sample included children who were aged 4 to 8 years, seated in the rear rows of the vehicle, and restrained by either a seat belt or a BPB seat. Multivariable logistic regression was used to determine the odds of injury for those in BPB seats versus those in seat belts. Effects of crash direction and booster seat type were also explored. RESULTS: Complete interview data were obtained on 7,151 children in 6,591 crashes, representing an estimated 120,646 children in 116,503 crashes in the study population. The adjusted relative risk for injury to children in BPB seats compared with those in seat belts was 0.55. CONCLUSIONS: This study reconfirms previous reports that BPB seats reduce the risk for injury in children aged 4 through 8 years. On the basis of these analyses, parents, pediatricians, and health educators should continue to recommend as best practice the use of BPB seats once a child outgrows a harness-based child restraint until he or she is at least 8 years of age.
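
As a rough illustration of the kind of weighted multivariable logistic regression the methods describe, here is a sketch using statsmodels. The file name, column names, and covariate set are assumptions, not the study's actual variables, and frequency weights are only one simple way to fold in survey weights (a full analysis would use design-based standard errors):

```python
# Illustrative sketch (not the study's actual model): a weighted
# multivariable logistic regression of injury on restraint type.
# Columns are assumed numeric, with bpb_seat coded 1 for a BPB seat
# and 0 for a seat belt; survey_weight is the child's sample weight.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("crash_sample.csv")  # hypothetical per-child records

fit = smf.glm(
    "injured ~ bpb_seat + child_age + seat_row",  # assumed covariates
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["survey_weight"],  # simple stand-in for survey weighting
).fit()

# Adjusted odds ratio for BPB seats vs. seat belts
print(np.exp(fit.params["bpb_seat"]))
```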


Accident Analysis & Prevention | 2004

An evaluation of the effectiveness of forward facing child restraint systems

Kristy B. Arbogast; Dennis R. Durbin; Rebecca A. Cornejo; Michael J. Kallan; Flaura Koplin Winston

The objective of this study was to determine the effectiveness of forward facing child restraint systems (FFCRS) in preventing serious injury and hospitalization to children 12-47 months of age as compared with similar age children in seat belts. Data were obtained from a cross-sectional study of children aged 12-47 months in crashes of insured vehicles in 15 states, with data collected via insurance claims records and a telephone survey. Effectiveness estimates were limited to those children between 12 and 47 months of age seated in the back row(s) of vehicles, restrained in FFCRS, regardless of misuse, or seat belts of all types and usage. Completed survey information was obtained on 1,207 children, representing 12,632 children in 11,619 crashes between 1 December 1998 and 31 May 2002. Serious injuries occurred to 0.47% of all 12-47-month-olds studied, including 1.72% of those in seat belts and 0.39% of those in child restraint systems. The risk of serious injury was 78% lower for children in FFCRS than in seat belts (odds ratio (OR) = 0.22, 95% confidence interval (CI) = 0.11-0.45, P = 0.001). The risk of hospitalization was 79% lower for children in FFCRS than in seat belts (OR = 0.21, 95% CI = 0.09-0.50, P = 0.001). There was no difference between the restraint types in preventing minor injuries. As compared with seat belts, FFCRS are highly effective in preventing both serious injuries and hospitalization. This effectiveness estimate is substantially higher than older estimates, demonstrating the benefits of current CRS designs. These results provide those educating parents and caregivers with population-based data on the importance of child restraint use.
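
For intuition, an unadjusted odds ratio and Wald 95% CI can be computed from a 2×2 table in a few lines. The cell counts below are made up for illustration; the paper's OR of 0.22 is an adjusted estimate from the weighted sample:

```python
# Back-of-the-envelope odds ratio with a Wald 95% CI from a 2x2 table.
# The cell counts are hypothetical, chosen only to illustrate the math.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: injured/uninjured in FFCRS; c/d: injured/uninjured in seat belts."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

or_, ci = odds_ratio_ci(a=40, b=10_200, c=30, d=1_710)  # made-up counts
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```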


Accident Analysis & Prevention | 2011

Prevalence of teen driver errors leading to serious motor vehicle crashes

Allison E. Curry; Jessica Hafetz; Michael J. Kallan; Flaura Koplin Winston; Dennis R. Durbin

OBJECTIVES Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. METHODS The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5,470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15- to 18-year-old drivers. RESULTS 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. CONCLUSIONS Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training.


JAMA Internal Medicine | 2009

Trial of Family and Friend Support for Weight Loss in African American Adults

Shiriki Kumanyika; Thomas A. Wadden; Justine Shults; Jennifer E. Fassbender; Stacey D. Brown; Marjorie A. Bowman; Vivian Brake; William West; Johnetta Frazier; Melicia C. Whitt-Glover; Michael J. Kallan; Emily Desnouee; Xiaoying Wu

BACKGROUND Family and friend participation may provide culturally salient social support for weight loss in African American adults. METHODS SHARE (Supporting Healthy Activity and eating Right Everyday) was a 2-year trial of a culturally specific weight loss program. African American women and men who enrolled alone (individual stratum, 63 index participants) or together with 1 or 2 family members or friends (family stratum, 130 index participants) were randomized, within strata, to high or low social support treatments; 90% were female. RESULTS At 6 months, the family index participants lost approximately 5 to 6 kg; the individual index participants lost approximately 3 to 4 kg. The mean weight change was not different in high vs low social support in either stratum and generally not when high or low support treatments were compared across strata. The overall intention-to-treat mean weight change at 24 months was -2.4 kg (95% confidence interval, -3.3 kg to -1.5 kg). The family index participant weight loss was greater among the participants whose partners attended more personally tailored counseling sessions at 6 months in the high-support group and at 6, 12, and 24 months in the low-support group (all P < .05). Also, in the 6-month intention-to-treat analysis, the percentage of weight loss of the family index participants was greater if partners lost at least 5% vs less than 5% of their baseline weight (respectively, -6.1% vs -2.9% [P = .004], high support; and -6.1% vs -3.1% [P = .01], low support). CONCLUSIONS Being assigned to participate with family members, friends, or other group members had no effect on weight change. Enrolling with others was associated with greater weight loss only when partners participated more and lost more weight. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT00146081.


Cancer Epidemiology, Biomarkers & Prevention | 2010

Risk Factors for Lymphedema after Breast Cancer Treatment

Sandra A. Norman; A. Russell Localio; Michael J. Kallan; Anita L. Weber; Heather A. Simoes Torpey; Sheryl L. Potashnik; Linda T. Miller; Kevin Fox; Angela DeMichele; Lawrence J. Solin

Background: As cancer treatments evolve, it is important to reevaluate their effect on lymphedema risk in breast cancer survivors. Methods: A population-based random sample of 631 women from metropolitan Philadelphia, Pennsylvania, diagnosed with incident breast cancer in 1999 to 2001, was followed for 5 years. Risk factor information was obtained by questionnaire and medical record review. Lymphedema was assessed with a validated questionnaire. Using Cox proportional hazards models, we estimated the relative incidence rates [hazard ratios (HR)] of lymphedema with standard adjusted multivariable analyses ignoring interactions, followed by models including clinically plausible treatment interactions. Results: Compared with no lymph node surgery, adjusted HRs for lymphedema were increased following axillary lymph node dissection [ALND; HR, 2.61; 95% confidence interval (95% CI), 1.77-3.84] but not sentinel lymph node biopsy (SLNB; HR, 1.04; 95% CI, 0.58-1.88). Risk was not increased following irradiation [breast/chest wall only: HR, 1.18 (95% CI, 0.80-1.73); breast/chest wall plus supraclavicular field (± full axilla): HR, 0.86 (95% CI, 0.48-1.54)]. Eighty-one percent of chemotherapy was anthracycline-based. The HR for anthracycline chemotherapy versus no chemotherapy was 1.46 (95% CI, 1.04-2.04), persisting after stratifying on stage at diagnosis or number of positive nodes. Treatment combinations involving ALND or chemotherapy resulted in approximately 4- to 5-fold increases in HRs for lymphedema [e.g., HR of 4.16 (95% CI, 1.32-12.45) for SLNB/chemotherapy/no radiation] compared with no treatment. Conclusion: With standard multivariable analyses, ALND and chemotherapy increased lymphedema risk whereas radiation therapy and SLNB did not. However, risk varied by combinations of exposures. Impact: Treatment patterns should be considered when counseling and monitoring patients for lymphedema. Cancer Epidemiol Biomarkers Prev; 19(11); 2734-46. ©2010 AACR.
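
A sketch of the kind of Cox proportional hazards fit described above, via the lifelines library. The data file and column names are hypothetical stand-ins for the study's exposure and outcome variables, not its actual dataset:

```python
# Illustrative Cox proportional hazards fit with lifelines.
# Assumed columns: years_followed (time on study), lymphedema (0/1 event
# indicator), and treatment indicators such as alnd, slnb, chemo,
# radiation. All names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("lymphedema_cohort.csv")  # hypothetical file

cph = CoxPHFitter()
# All columns other than the duration and event columns enter as covariates
cph.fit(df, duration_col="years_followed", event_col="lymphedema")
cph.print_summary()  # hazard ratios appear as exp(coef)
```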


Journal of the American Geriatrics Society | 2002

Association between medical comorbidity and treatment outcomes in late-life depression.

David W. Oslin; Catherine J. Datto; Michael J. Kallan; Ira R. Katz; William S. Edell; Thomas TenHave

OBJECTIVES: Previous studies have demonstrated an association between major depression and physical disability in late life. The objectives of this study were to examine the relationship between specific medical illnesses and the outcomes of treatment for late-life depression.


Annals of Surgery | 2004

Optimal Restraint Reduces the Risk of Abdominal Injury in Children Involved in Motor Vehicle Crashes

Michael L. Nance; Nicolas Lutz; Kristy B. Arbogast; Rebecca A. Cornejo; Michael J. Kallan; Flaura Koplin Winston; Dennis R. Durbin

Background: The American Academy of Pediatrics has established guidelines for optimal, age-appropriate child occupant restraint. While optimal restraint has been shown to reduce the risk of injuries overall, its effect on specific types of injuries, in particular abdominal injuries, has not been demonstrated. Methods: Cross-sectional study of children aged younger than 16 years in crashes of insured vehicles in 15 states, with data collected via insurance claims records and a telephone survey. A probability sample of 10,927 crashes involving 17,132 restrained children, representing 210,926 children in 136,734 crashes, was collected between December 1, 1998 and May 31, 2002. Restraint use was categorized as optimal or suboptimal based on current American Academy of Pediatrics guidelines. The outcome of interest, abdominal injury, was defined as any reported injury to an intra-abdominal organ of Abbreviated Injury Scale ≥2 severity. Results: Among all restrained children, optimal restraint was noted in 59% (n = 120,473) and suboptimal restraint in 41% (n = 83,555). An associated abdominal organ injury was noted in 0.05% (n = 62) of the optimally restrained group and 0.17% (n = 140) of the suboptimally restrained group. After adjusting for age and seating position (front vs. rear), suboptimally restrained children were more than 3 times as likely as optimally restrained children [odds ratio 3.51 (95% confidence interval, 1.87-6.60), P < 0.001] to suffer an abdominal injury. Of note, there were no abdominal injuries reported among optimally restrained 4- to 8-year-olds. Conclusions: Optimally restrained children are at a significantly lower risk of abdominal injury than children suboptimally restrained for age. This disparity emphasizes the need for aggressive education efforts aimed not only at getting children into restraint systems, but also at ensuring optimal, age-appropriate restraint.
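
As a sanity check on the headline figure, the crude risk contrast can be recovered from the reported weighted counts; note that the paper's 3.51 is an age- and seating-position-adjusted odds ratio, so the crude ratio differs slightly:

```python
# Crude relative risk from the weighted counts reported in the abstract:
# 140 injuries among 83,555 suboptimally restrained children (0.17%)
# vs. 62 among 120,473 optimally restrained children (0.05%).
risk_suboptimal = 140 / 83_555
risk_optimal = 62 / 120_473
print(f"crude relative risk: {risk_suboptimal / risk_optimal:.2f}")  # ~3.26
```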

Collaboration


Dive into Michael J. Kallan's collaborations.

Top Co-Authors

Dennis R. Durbin, University of Pennsylvania
Flaura Koplin Winston, Children's Hospital of Philadelphia
Brendan G. Carr, University of Pennsylvania
Charles C. Branas, University of Pennsylvania
Michael L. Nance, Children's Hospital of Philadelphia
Douglas J. Wiebe, University of Pennsylvania
Ira R. Katz, University of California