Publication


Featured research published by Oliver Bembom.


American Journal of Epidemiology | 2009

Influenza Vaccination and Mortality: Differentiating Vaccine Effects From Bias

Bruce Fireman; Janelle Lee; Ned Lewis; Oliver Bembom; Mark J. van der Laan; Roger Baxter

It is widely believed that influenza (flu) vaccination of the elderly reduces all-cause mortality, yet randomized trials for assessing vaccine effectiveness are not feasible and the observational research has been controversial. Efforts to differentiate vaccine effectiveness from selection bias have been problematic. The authors examined mortality before, during, and after 9 flu seasons in relation to time-varying vaccination status in an elderly California population in which 115,823 deaths occurred from 1996 to 2005, including 20,484 deaths during laboratory-defined flu seasons. Vaccine coverage averaged 63%; excess mortality when the flu virus was circulating averaged 7.8%. In analyses that omitted weeks when flu circulated, the odds ratio measuring the vaccination-mortality association increased monotonically from 0.34 early in November to 0.56 in January, 0.67 in April, and 0.76 in August. This reflects the trajectory of selection effects in the absence of flu. In analyses that included weeks with flu and adjustment for selection effects, flu season multiplied the odds ratio by 0.954. The corresponding vaccine effectiveness estimate was 4.6% (95% confidence interval: 0.7, 8.3). To differentiate vaccine effects from selection bias, the authors used logistic regression with a novel case-centered specification that may be useful in other population-based studies when the exposure-outcome association varies markedly over time.
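For orientation, the sketch below fits an ordinary person-week logistic regression with a vaccination-by-flu-season interaction, which is the standard formulation of the question the authors' case-centered specification refines; it is not their method, and the simulated data, variable names, and seasonal term are purely illustrative.

```python
# Minimal sketch: person-week logistic regression with a vaccination x flu
# interaction. Simulated data; not the case-centered specification above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200_000
week = rng.integers(0, 52, size=n)                 # calendar week of each person-week
vaccinated = rng.binomial(1, 0.63, size=n)         # ~63% coverage, as in the abstract
flu = (week < 8).astype(int)                       # toy flu-circulation window
# Toy mortality process: vaccinated subjects are healthier (selection bias),
# and vaccination lowers risk slightly more when flu is circulating.
lin = -6.0 - 0.8 * vaccinated + 0.08 * flu - 0.05 * vaccinated * flu
death = rng.binomial(1, 1 / (1 + np.exp(-lin)))

df = pd.DataFrame(dict(death=death, vaccinated=vaccinated, flu=flu, week=week))

# The vaccinated main effect absorbs selection (non-flu weeks); the
# vaccinated:flu interaction plays the role of the abstract's flu-season
# multiplier on the odds ratio.
fit = smf.logit("death ~ vaccinated * flu + np.cos(2 * np.pi * week / 52)",
                data=df).fit(disp=0)
print(np.exp(fit.params[["vaccinated", "vaccinated:flu"]]))
```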


Journal of Bone and Joint Surgery, American Volume | 2009

Delayed Internal Fixation of Femoral Shaft Fracture Reduces Mortality Among Patients with Multisystem Trauma

Saam Morshed; Theodore Miclau; Oliver Bembom; Mitchell J. Cohen; M. Margaret Knudson; John M. Colford

BACKGROUND Fractures of the femoral shaft are common and have potentially serious consequences in patients with multiple injuries. The appropriate timing of fracture repair is controversial. The purpose of the present study was to assess the effect of timing of internal fixation on mortality in patients with multisystem trauma. METHODS We performed a retrospective cohort study with use of data from public and private trauma centers throughout the United States that were reported to the National Trauma Data Bank (version 5.0 for 2000 through 2004). The study included 3069 patients with multisystem trauma (Injury Severity Score ≥ 15) who underwent internal fixation of a femoral shaft fracture. The time to treatment was defined in categories as the time from admission to internal fixation: t(0) (twelve hours or less), t(1) (more than twelve hours to twenty-four hours), t(2) (more than twenty-four hours to forty-eight hours), t(3) (more than forty-eight hours to 120 hours), and t(4) (more than 120 hours). The relative risk of in-hospital mortality when the four later periods were compared with the earliest one was estimated with inverse probability of treatment-weighted analysis. Subgroups with serious head or neck, chest, abdominal, and additional extremity injury were investigated. RESULTS When compared with that during the first twelve hours after admission, the estimated mortality risk was significantly lower in three time categories: t(1) (relative risk, 0.45; 95% confidence interval, 0.15 to 0.98; p = 0.03), t(3) (relative risk, 0.58; 95% confidence interval, 0.28 to 0.93; p = 0.03), and t(4) (relative risk, 0.43; 95% confidence interval, 0.10 to 0.94; p = 0.03). Patients with serious abdominal trauma (Abbreviated Injury Score ≥ 3) experienced the greatest benefit from a delay of internal fixation beyond twelve hours (relative risk, 0.82 [95% confidence interval, 0.54 to 1.35] for patients with an Abbreviated Injury Score of < 3, compared with 0.36 [95% confidence interval, 0.13 to 0.87] for those with an Abbreviated Injury Score of ≥ 3) (p value for effect modification, 0.09). CONCLUSIONS Delayed repair of femoral shaft fracture beyond twelve hours in patients with multisystem trauma, which may allow time for appropriate resuscitation, reduces mortality by approximately 50%. Patients with serious abdominal injury benefit most from delayed treatment. These results support delaying definitive treatment of long-bone injuries in patients with multisystem trauma as a means of so-called damage-control in order to reduce adverse outcomes.
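As a rough illustration of the inverse-probability-of-treatment-weighted comparison referenced above, the sketch below estimates weighted mortality risks across timing categories and relative risks versus the earliest category. The simulated data and the simple multinomial propensity model are assumptions for illustration, not the study's actual analysis.

```python
# Minimal IPTW sketch: relative risk of mortality across treatment-timing
# categories, with weights from a multinomial propensity model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 3))                         # injury-severity covariates (toy)
# Timing category 0..4 depends on covariates (confounding by indication).
lin = X @ np.array([0.5, -0.3, 0.2])
timing = np.clip(np.round(2 + lin + rng.normal(size=n)), 0, 4).astype(int)
p_death = 1 / (1 + np.exp(-(-2.0 + 0.8 * X[:, 0] - 0.3 * (timing > 0))))
death = rng.binomial(1, p_death)

# Multinomial propensity model: P(timing = t | X), and weights 1 / P(observed timing | X)
ps_model = LogisticRegression(max_iter=1000).fit(X, timing)
ps = ps_model.predict_proba(X)[np.arange(n), timing]
w = 1.0 / ps

# Weighted mortality risk in each category and relative risk vs. the earliest category
risks = np.array([np.average(death[timing == t], weights=w[timing == t])
                  for t in range(5)])
print("RR vs t(0):", np.round(risks[1:] / risks[0], 2))
```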


Statistical Applications in Genetics and Molecular Biology | 2007

Supervised detection of conserved motifs in DNA sequences with cosmo

Oliver Bembom; Sunduz Keles; Mark J. van der Laan

A number of computational methods have been proposed for identifying transcription factor binding sites from a set of unaligned sequences that are thought to share the motif in question. We here introduce an algorithm, called cosmo, that allows this search to be supervised by specifying a set of constraints that the position weight matrix of the unknown motif must satisfy. Such constraints may be formulated, for example, on the basis of prior knowledge about the structure of the transcription factor in question. The algorithm is based on the same two-component multinomial mixture model used by MEME, with stronger reliance, however, on the likelihood principle instead of more ad-hoc criteria like the E-value. The intensity parameter in the ZOOPS and TCM models, for instance, is estimated based on a profile-likelihood approach, and the width of the unknown motif is selected based on BIC. These changes allow cosmo to outperform MEME even in the absence of any constraints, as evidenced by 2- to 3-fold greater sensitivity in some simulation studies. Additional improvements in performance can be achieved by selecting the model type (OOPS, ZOOPS, or TCM) data-adaptively or by supplying correctly specified constraints, especially if the motif appears only as a weak signal in the data. The algorithm can data-adaptively choose between working in a given constrained model or in the completely unconstrained model, guarding against the risk of supplying mis-specified constraints. Simulation studies suggest that this approach can offer 3 to 3.5 times greater sensitivity than MEME. The algorithm has been implemented in the form of a stand-alone C program as well as a web application that can be accessed at http://cosmoweb.berkeley.edu. An R package is available through Bioconductor (http://bioconductor.org).
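For readers unfamiliar with the terminology, the position weight matrix on which cosmo's constraints are formulated can be illustrated in a few lines. The sketch below builds a PWM from toy aligned sites and scores a window against a uniform background; it is a minimal illustration of the data structure only, not cosmo's mixture-model estimation procedure, and the example sites are invented.

```python
# Minimal sketch of a position weight matrix (PWM) and a log-likelihood-ratio
# score against a uniform background. Toy data; not cosmo's algorithm.
import numpy as np

BASES = "ACGT"

def build_pwm(sites, pseudocount=0.5):
    """Column-wise base frequencies (with pseudocounts) of aligned sites."""
    width = len(sites[0])
    counts = np.full((4, width), pseudocount)
    for site in sites:
        for j, base in enumerate(site):
            counts[BASES.index(base), j] += 1
    return counts / counts.sum(axis=0)

def score(pwm, window):
    """Log-likelihood ratio of a window under the PWM vs. a uniform background."""
    return sum(np.log(pwm[BASES.index(b), j] / 0.25) for j, b in enumerate(window))

sites = ["TGACTCA", "TGAGTCA", "TGACTCT", "TGAGTCA"]   # toy AP-1-like sites
pwm = build_pwm(sites)
print(np.round(pwm, 2))
print(score(pwm, "TGACTCA"), score(pwm, "AAAAAAA"))
```

Structural constraints of the kind the abstract describes (for example, requiring a palindromic or information-content profile) are restrictions on the columns of this matrix.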


Statistics in Medicine | 2009

Biomarker Discovery Using Targeted Maximum Likelihood Estimation: Application to the Treatment of Antiretroviral Resistant HIV Infection

Oliver Bembom; Maya L. Petersen; Soo-Yon Rhee; W. Jeffrey Fessel; Sandra E. Sinisi; Robert W. Shafer; Mark J. van der Laan

Researchers in clinical science and bioinformatics frequently aim to learn which of a set of candidate biomarkers is important in determining a given outcome, and to rank the contributions of the candidates accordingly. This article introduces a new approach to research questions of this type, based on targeted maximum-likelihood estimation of variable importance measures. The methodology is illustrated using an example drawn from the treatment of HIV infection. Specifically, given a list of candidate mutations in the protease enzyme of HIV, we aim to discover mutations that reduce clinical virologic response to antiretroviral regimens containing the protease inhibitor lopinavir. In the context of this data example, the article reviews the motivation for covariate adjustment in the biomarker discovery process. A standard maximum-likelihood approach to this adjustment is compared with the targeted approach introduced here. Implementation of targeted maximum-likelihood estimation in the context of biomarker discovery is discussed, and the advantages of this approach are highlighted. Results of applying targeted maximum-likelihood estimation to identify lopinavir resistance mutations are presented and compared with results based on unadjusted mutation-outcome associations as well as results of a standard maximum-likelihood approach to adjustment. The subset of mutations identified by targeted maximum likelihood as significant contributors to lopinavir resistance is found to be in better agreement with the current understanding of HIV antiretroviral resistance than the corresponding subsets identified by the other two approaches. This finding suggests that targeted estimation of variable importance represents a promising approach to biomarker discovery.
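The targeted maximum-likelihood idea can be sketched for the simplest case: one binary biomarker, a binary outcome, and baseline covariates. Fit an initial outcome regression, then "fluctuate" it along a clever covariate built from the propensity score, and read off the adjusted effect from the updated fit. The code below is a schematic one-step TMLE on simulated data, not the paper's variable-importance implementation; all names and the data-generating process are assumptions for illustration.

```python
# Schematic one-step TMLE for the adjusted effect of a binary biomarker A
# on a binary outcome Y given covariates W. Simulated data only.
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit

rng = np.random.default_rng(2)
n = 2_000
W = rng.normal(size=(n, 2))                        # e.g. other mutations, treatment history
A = rng.binomial(1, expit(0.6 * W[:, 0]))          # biomarker depends on W (confounding)
Y = rng.binomial(1, expit(-0.5 + 0.7 * A + 0.5 * W[:, 1]))

Wc = sm.add_constant(W)
# Step 1: initial outcome regression Q(A, W) = P(Y = 1 | A, W)
q_fit = sm.GLM(Y, np.column_stack([Wc, A]), family=sm.families.Binomial()).fit()
Q_A = q_fit.predict(np.column_stack([Wc, A]))
Q_1 = q_fit.predict(np.column_stack([Wc, np.ones(n)]))
Q_0 = q_fit.predict(np.column_stack([Wc, np.zeros(n)]))

# Step 2: propensity g(W) = P(A = 1 | W) and the "clever covariate" H(A, W)
g = sm.GLM(A, Wc, family=sm.families.Binomial()).fit().predict(Wc)
H = A / g - (1 - A) / (1 - g)

# Step 3: fluctuate the initial fit along H (logistic regression with offset)
eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
             offset=logit(Q_A)).fit().params[0]
Q_1_star = expit(logit(Q_1) + eps / g)
Q_0_star = expit(logit(Q_0) - eps / (1 - g))

print("targeted estimate of E[Y(1)] - E[Y(0)]:", np.mean(Q_1_star - Q_0_star))
```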


Epidemiology | 2009

Leisure-time physical activity and all-cause mortality in an elderly cohort.

Oliver Bembom; Mark J. van der Laan; Thaddeus J. Haight; Ira B. Tager

Background: Physical activity is one of the mainstays of secondary prevention in people with heart disease. It is not well understood, however, how the presence of heart disease or a history of habitual exercise prior to the study modifies any mortality-sparing effects of leisure-time physical activity. Methods: We analyzed data from a well-described cohort of subjects aged 54 years and older at intake (median age, 70 years) from Sonoma, CA, studied between 1993 and 2001 with mortality follow-up until 2003. A history-adjusted marginal structural model was used to obtain counterfactual excess risk estimates that were pooled across the different time points. Additive interaction was examined by comparing these excess risk estimates across strata of age, heart disease, and precohort physical activity. Results: Estimates of the excess risk for 2-year all-cause mortality comparing Centers for Disease Control and Prevention–recommended levels of current physical activity to lower levels of activity ranged from −0.7% to −4.9% among subjects younger than 75 years of age and from −7.8% to −14.8% among older subjects. Neither heart disease nor precohort physical activity was found to modify the effect of leisure-time physical activity. Conclusions: Our data are consistent with the view that the mortality-sparing effect of recent physical activity is independent of the presence or absence of underlying cardiac disease and the pattern of past physical activity.
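A stripped-down version of the excess-risk comparison described above, reduced to a single time point rather than the paper's history-adjusted marginal structural model: inverse-probability weights from a propensity model for meeting an activity guideline, followed by weighted risk differences within age strata. The data and variable names are simulated and illustrative.

```python
# Sketch: IPT-weighted excess risk (risk difference) for physical activity,
# compared across age strata to look for additive interaction. Toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 8_000
age = rng.uniform(55, 90, size=n)
active = rng.binomial(1, 1 / (1 + np.exp(0.05 * (age - 70))))   # older -> less active
p_death = 1 / (1 + np.exp(-(-3.5 + 0.06 * (age - 70) - 0.4 * active)))
death = rng.binomial(1, p_death)

# Inverse-probability-of-treatment weights from a propensity model for activity
ps_model = LogisticRegression(max_iter=1000).fit(age[:, None], active)
g = ps_model.predict_proba(age[:, None])[:, 1]
w = np.where(active == 1, 1 / g, 1 / (1 - g))

for label, idx in [("< 75 years", age < 75), (">= 75 years", age >= 75)]:
    treated, untreated = idx & (active == 1), idx & (active == 0)
    risk1 = np.average(death[treated], weights=w[treated])
    risk0 = np.average(death[untreated], weights=w[untreated])
    print(f"{label}: excess risk = {100 * (risk1 - risk0):.1f} percentage points")
```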


Statistics in Medicine | 2008

Analyzing sequentially randomized trials based on causal effect models for realistic individualized treatment rules.

Oliver Bembom; Mark J. van der Laan

In this paper, we argue that causal effect models for realistic individualized treatment rules represent an attractive tool for analyzing sequentially randomized trials. Unlike a number of methods proposed previously, this approach does not rely on the assumption that intermediate outcomes are discrete or that models for the distributions of these intermediate outcomes given the observed past are correctly specified. In addition, it generalizes the methodology for performing pairwise comparisons between individualized treatment rules by allowing the user to posit a marginal structural model for all candidate treatment rules simultaneously. This is particularly useful if the number of such rules is large, in which case an approach based on individual pairwise comparisons would be likely to suffer from too much sampling variability to provide an informative answer. In addition, such causal effect models represent an interesting alternative to methods previously proposed for selecting an optimal individualized treatment rule in that they immediately give the user a sense of how the optimal outcome is estimated to change in the neighborhood of the identified optimum. We discuss an inverse-probability-of-treatment-weighted (IPTW) estimator for these causal effect models, which is straightforward to implement using standard statistical software, and develop an approach for constructing valid asymptotic confidence intervals based on the influence curve of this estimator. The methodology is illustrated in two simulation studies that are intended to mimic an HIV/AIDS trial.
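The IPTW estimator for comparing candidate treatment rules can be sketched in a few lines for a single time point (the paper treats sequentially randomized, multi-stage trials). The rules, the known randomization probability, and the data below are illustrative assumptions, not taken from the paper.

```python
# Sketch: IPTW estimates of the mean outcome under each candidate treatment
# rule d(V), single time point, weight-normalized (Hajek-type) estimator.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
V = rng.normal(size=n)                              # baseline covariate used by the rules
A = rng.binomial(1, 0.5, size=n)                    # randomized treatment
Y = 1.0 + 0.5 * A * (V > 0) + rng.normal(size=n)    # treatment helps only when V > 0

g = np.full(n, 0.5)   # known randomization probability; would be estimated observationally

# Candidate rules d(V): "treat if V exceeds a threshold theta"
for theta in (-1.0, 0.0, 1.0):
    follows = (A == (V > theta)).astype(float)      # indicator of following the rule
    mean_Y_d = np.mean(follows * Y / g) / np.mean(follows / g)
    print(f"rule: treat if V > {theta:+.1f}  ->  estimated E[Y_d] = {mean_Y_d:.3f}")
```

Positing a marginal structural model over all rules, as the abstract describes, amounts to smoothing these rule-specific means as a function of the rule index (here, the threshold theta) rather than estimating each one separately.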


Archive | 2011

Improving the FDA Approval Process

Anup Malani; Oliver Bembom; Mark J. van der Laan

The FDA employs an average-patient standard when reviewing drugs: it approves a drug only if it is safe and effective for the average patient in a clinical trial. It is common, however, for patients to respond differently to a drug. Therefore, the average-patient standard can reject a drug that benefits certain patient subgroups (false negatives) and even approve a drug that harms other patient subgroups (false positives). These errors increase the cost of drug development – and thus health care – by wasting research on unproductive or unapproved drugs. The reason why the FDA sticks with an average-patient standard is concern about opportunism by drug companies. With enough data dredging, a drug company can always find some subgroup of patients that appears to benefit from its drug, even if it truly does not. In this paper we offer alternatives to the average-patient standard that reduce the risk of false negatives without increasing false positives from drug company opportunism. These proposals combine changes to institutional design – evaluation of trial data by an independent auditor – with statistical tools to reinforce the new institutional design – specifically, to ensure the auditor is truly independent of drug companies. We illustrate our proposals by applying them to the results of a recent clinical trial of a cancer drug (motexafin gadolinium). Our analysis suggests that the FDA may have made a mistake in rejecting that drug.


Electronic Journal of Statistics | 2007

A practical illustration of the importance of realistic individualized treatment rules in causal inference

Oliver Bembom; Mark J. van der Laan

The effect of vigorous physical activity on mortality in the elderly is difficult to estimate using conventional approaches to causal inference that define this effect by comparing the mortality risks corresponding to hypothetical scenarios in which all subjects in the target population engage in a given level of vigorous physical activity. A causal effect defined on the basis of such a static treatment intervention can only be identified from observed data if all subjects in the target population have a positive probability of selecting each of the candidate treatment options, an assumption that is highly unrealistic in this case since subjects with serious health problems will not be able to engage in higher levels of vigorous physical activity. This problem can be addressed by focusing instead on causal effects that are defined on the basis of realistic individualized treatment rules and intention-to-treat rules that explicitly take into account the set of treatment options that are available to each subject. We present a data analysis to illustrate that estimators of static causal effects in fact tend to overestimate the beneficial impact of high levels of vigorous physical activity while corresponding estimators based on realistic individualized treatment rules and intention-to-treat rules can yield unbiased estimates. We emphasize that the problems encountered in estimating static causal effects are not restricted to the IPTW estimator, but are also observed with the G-computation estimator, the DR-IPTW estimator, and the targeted MLE. Our analyses based on realistic individualized treatment rules and intention-to-treat rules suggest that high levels of vigorous physical activity may confer reductions in mortality risk on the order of 15-30%, although in most cases the evidence for such an effect does not quite reach the 0.05 level of significance.
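The positivity problem described above can be made concrete with a short diagnostic: estimate each subject's probability of the treatment level of interest and check how often it is practically zero, which signals that a static intervention is unrealistic for those subjects. The sketch below uses simulated data and illustrative names; it is not the paper's analysis.

```python
# Sketch of a positivity diagnostic: how many subjects have essentially no
# chance of the "treated" level (vigorous activity)? Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
health = rng.normal(size=n)                          # latent health status
# Subjects in poor health almost never engage in vigorous activity.
vigorous = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 3.0 * health))))

ps_model = LogisticRegression(max_iter=1000).fit(health[:, None], vigorous)
g_hat = ps_model.predict_proba(health[:, None])[:, 1]

print("share with P(vigorous | covariates) < 1%:", np.mean(g_hat < 0.01))
print("smallest estimated probability:", g_hat.min())
```

Realistic individualized treatment rules sidestep this problem by only assigning a treatment level to subjects whose estimated probability of that level exceeds a chosen cutoff.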


Archive | 2009

Accounting for Differences Among Patients in the FDA Approval Process

Anup Malani; Oliver Bembom; Mark J. van der Laan

The FDA employs an average-patient standard when reviewing drugs: it approves a drug only if the average patient (in clinical trials) does better on the drug than on control. It is common, however, for different patients to respond differently to a drug. Therefore, the average-patient standard can result in approval of a drug with significant negative effects for certain patient subgroups (false positives) and disapproval of drugs with significant positive effects for other patient subgroups (false negatives). Drug companies have a financial incentive to avoid false negatives. After their clinical trials reveal that their drug does not benefit the average patient, they conduct what is called post hoc subgroup analysis to highlight patients who benefit from the drug. The FDA rejects such analysis due to the risk of spurious results. With enough data dredging, a drug company can always find some patients who appear to benefit from its drug. This paper asks whether there is a workable compromise between the FDA and drug companies. Specifically, we seek a drug approval process that can use post hoc subgroup analysis to eliminate false negatives but does not risk opportunistic behavior and spurious correlation. We recommend that the FDA or some other independent agent conduct subgroup analysis to identify patient subgroups that may benefit from a drug. Moreover, we suggest a number of statistical algorithms that operate as veil-of-ignorance rules to ensure that the independent agent is not indirectly captured by drug companies. We illustrate our proposal by applying it to the results of a recent clinical trial of a cancer drug (motexafin gadolinium) that was rejected by the FDA.


Archive | 2007

Identifying important explanatory variables for time-varying outcomes.

Oliver Bembom; Maya L. Petersen; Mark J. van der Laan

Many applications in modern biology measure a large number of genomic or proteomic covariates and are interested in assessing the impact of each of these covariates on a particular outcome of interest. In a study which follows a cohort of HIV-positive patients over time, for example, a researcher may genotype the virus infecting each patient to ascertain the presence or absence of a large number of mutations, in the hope of identifying mutations that affect how a patient’s plasma HIV RNA level (viral load) responds to a new drug regimen. Along with an estimate of the impact of each mutation on the time course of viral load, the researcher would generally like to have a measure of the statistical significance of these estimates in order to identify those mutations that are most likely to be genuinely related to the outcome. Such information could then be used to inform the decision of which drugs should be included in the regimen of a patient with a particular pattern of mutations.

Collaboration


Dive into Oliver Bembom's collaborations.

Top Co-Authors

Sunduz Keles
University of Wisconsin-Madison

Ira B. Tager
University of California

Christopher J. Logothetis
University of Texas MD Anderson Cancer Center

Dallas Williams
University of Texas MD Anderson Cancer Center