Ann A. Lazar
University of California, San Francisco
Publication
Featured research published by Ann A. Lazar.
Journal of Clinical Oncology | 2013
Antonio C. Wolff; Ann A. Lazar; Igor Bondarenko; August Garin; Stephen Brincat; Louis W.C. Chow; Yan Sun; Zora Neskovic-Konstantinovic; Rodrigo C. Guimaraes; Pierre Fumoleau; Arlene Chan; Soulef Hachemi; Andrew Strahs; Maria Cincotta; Anna Berkenblit; Mizue Krygowski; Lih Lisa Kang; Laurence Moore; Daniel F. Hayes
PURPOSE Recent data showed improvement in progression-free survival (PFS) when adding everolimus to exemestane in patients with advanced breast cancer experiencing recurrence/progression after nonsteroidal aromatase inhibitor (AI) therapy. Here, we report clinical outcomes of combining the mammalian target of rapamycin (mTOR) inhibitor temsirolimus with letrozole in AI-naive patients. PATIENTS AND METHODS This phase III randomized placebo-controlled study tested efficacy/safety of first-line oral letrozole 2.5 mg daily/temsirolimus 30 mg daily (5 days every 2 weeks) versus letrozole/placebo in 1,112 patients with AI-naive, hormone receptor-positive advanced disease. An independent data monitoring committee recommended study termination for futility at the second preplanned interim analysis (382 PFS events). RESULTS Patients were balanced (median age, 63 years; 10% stage III, 40% had received adjuvant endocrine therapy). Those on letrozole/temsirolimus experienced more grade 3 to 4 events (37% v 24%). There was no overall improvement in primary end point PFS (median, 9 months; hazard ratio [HR], 0.90; 95% CI, 0.76 to 1.07; P = .25) nor in the 40% patient subset with prior adjuvant endocrine therapy. An exploratory analysis showed improved PFS favoring letrozole/temsirolimus in patients ≤ age 65 years (9.0 v 5.6 months; HR, 0.75; 95% CI, 0.60 to 0.93; P = .009), which was separately examined by an exploratory analysis of 5-month PFS using subpopulation treatment effect pattern plot methodology (P = .003). CONCLUSION Adding temsirolimus to letrozole did not improve PFS as first-line therapy in patients with AI-naive advanced breast cancer. Exploratory analyses of benefit in younger postmenopausal patients require external confirmation.
Arthritis & Rheumatism | 2010
Kevin D. Deane; Colin O'Donnell; Wolfgang Hueber; Darcy S. Majka; Ann A. Lazar; Lezlie A. Derber; William R. Gilliland; Jess D. Edison; Jill M. Norris; William H. Robinson; V. Michael Holers
OBJECTIVE To evaluate levels of biomarkers in preclinical rheumatoid arthritis (RA) and to use elevated biomarkers to develop a model for the prediction of time to future diagnosis of seropositive RA. METHODS Stored samples obtained from 73 military cases with seropositive RA prior to RA diagnosis and from controls (mean 2.9 samples per case; samples collected a mean of 6.6 years prior to diagnosis) were tested for rheumatoid factor (RF) isotypes, anti-cyclic citrullinated peptide (anti-CCP) antibodies, 14 cytokines and chemokines (by bead-based assay), and C-reactive protein (CRP). RESULTS Preclinical positivity for anti-CCP and/or ≥2 RF isotypes was >96% specific for future RA. In preclinical RA, levels of the following were positive in a significantly greater proportion of RA cases versus controls: interleukin-1α (IL-1α), IL-1β, IL-6, IL-10, IL-12p40, IL-12p70, IL-15, fibroblast growth factor 2, flt-3 ligand, tumor necrosis factor α, interferon-γ-inducible 10-kd protein, granulocyte-macrophage colony-stimulating factor, and CRP. Also, increasing numbers of elevated cytokines/chemokines were present in cases nearer to the time of diagnosis. RA patients who were ≥40 years old at diagnosis had a higher proportion of samples positive for cytokines/chemokines 5-10 years prior to diagnosis than did patients who were <40 years old at diagnosis (P < 0.01). In regression modeling using only case samples positive for autoantibodies highly specific for future RA, increasing numbers of cytokines/chemokines were predictive of decreased time to diagnosis, and the predicted time to diagnosis based on cytokines/chemokines was longer in older compared with younger cases. CONCLUSION Levels of autoantibodies, cytokines/chemokines, and CRP are elevated in the preclinical period of RA development. In preclinical autoantibody-positive cases, the number of elevated cytokines/chemokines is predictive of the time of diagnosis of future RA in an age-dependent manner.
Annals of the Rheumatic Diseases | 2008
Darcy S. Majka; Kevin D. Deane; L. A. Parrish; Ann A. Lazar; A E Barón; C W Walker; M V Rubertone; W R Gilliland; Jill M. Norris; V. M. Holers
Objectives: To investigate factors that may influence the prevalence and timing of appearance of rheumatoid factor (RF) and anti-cyclic citrullinated peptide (anti-CCP) antibodies during the preclinical phase of rheumatoid arthritis (RA) development. Methods: 243 serial prediagnosis serum samples from 83 subjects with RA were examined for the presence of RF and anti-CCP antibodies. Results: Of the 83 cases, 47 (57%) and 51 (61%) subjects had at least one prediagnosis sample positive for RF or anti-CCP, respectively. Gender and race were not significantly associated with the prevalence or timing of preclinical antibody appearance. Preclinical anti-CCP positivity was strongly associated with the development of erosive RA (odds ratio = 4.64; 95% confidence interval 1.71 to 12.63; p<0.01), but RF was not (p = 0.60). Additionally, as age at the time of diagnosis of RA increased the duration of prediagnosis antibody positivity for RF and anti-CCP increased, with the longest duration of preclinical antibody positivity seen in patients diagnosed with RA over the age of 40. In no subjects did symptom onset precede the appearance of RF or anti-CCP antibodies. Conclusions: The period of time that RF and anti-CCP are present before diagnosis lengthens as the age at the time of diagnosis of RA increases. This finding suggests that factors such as genetic risk or environmental exposure influencing the temporal relationship between the development of RA-related autoantibodies and clinically apparent disease onset may differ with age.
Hepatology | 2004
Francis Y. Yao; Sammy Saab; Nathan M. Bass; Ryutaro Hirose; David Ly; Norah A. Terrault; Ann A. Lazar; Peter Bacchetti; Nancy L. Ascher; John P. Roberts
The current policy for determining priority for organ allocation is based on the model for end stage liver disease (MELD). We hypothesize that severity of graft dysfunction assessed by either the MELD score or the Child‐Turcotte‐Pugh (CTP) score correlates with mortality after liver retransplantation (re‐OLT). To test this hypothesis, we analyzed the outcome of 40 consecutive patients who received re‐OLT more than 90 days after primary orthotopic liver transplantation (OLT). The Kaplan‐Meier 1‐year and 5‐year survival rates after re‐OLT were 69% and 62%, respectively. The area under the curve (AUC) values generated by the receiver operating characteristics (ROC) curves were 0.82 (CI 0.70‐0.94) and 0.68 (CI 0.49‐0.86), respectively (P = .11), for the CTP and MELD models in predicting 1‐year mortality after re‐OLT. The 1‐year and 5‐year survival rates for patients with CTP scores less than 10 were 100% versus 50% and 40%, respectively, for CTP scores of at least 10 (P = .0006). Patients with MELD scores less than or equal to 25 had 1‐year and 5‐year survival rates of 89% and 79%, respectively, versus 53% and 47%, respectively, for MELD scores greater than 25 (P = .038). Other mortality predictors include hepatic encephalopathy, intensive care unit (ICU) stay, recurrent hepatitis C virus (HCV) infection, and creatinine level of 2 mg/dL or higher. Analysis of an independent cohort of 49 patients showed a trend for a correlation between CTP and MELD scores with 1‐year mortality, with AUC of 0.59 and 0.57, in respective ROC curves. In conclusion, our results suggest that severity of graft failure based on CTP and MELD scores may be associated with worse outcome after re‐OLT and provide a cautionary note for the “sickest first” policy of organ allocation. (HEPATOLOGY 2004;39:230–238.)
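The CTP-versus-MELD comparison above hinges on the area under the ROC curve, which equals the probability that a randomly chosen death received a higher score than a randomly chosen survivor. A minimal sketch of that rank-based (Mann-Whitney) computation on invented toy scores (not the paper's cohort):

```python
import numpy as np

def auc_rank(scores, died):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen death has a higher score than a randomly chosen
    survivor (ties count one half)."""
    scores, died = np.asarray(scores, float), np.asarray(died, bool)
    pos, neg = scores[died], scores[~died]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Illustrative toy data: CTP scores range 5-15, MELD roughly 6-40
ctp  = [12, 13, 9, 14, 12, 11, 7, 10, 6, 13]
meld = [28, 30, 18, 35, 15, 22, 12, 25, 10, 20]
died = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(f"CTP AUC = {auc_rank(ctp, died):.2f}, "
      f"MELD AUC = {auc_rank(meld, died):.2f}")
```

With real survival data, time-dependent ROC methods would be more appropriate; this sketch only shows the core AUC calculation.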
Journal of Clinical Oncology | 2010
Ann A. Lazar; Bernard F. Cole; Marco Bonetti; Richard D. Gelber
The discovery of biomarkers that predict treatment effectiveness has great potential for improving medical care, particularly in oncology. These biomarkers are increasingly reported on a continuous scale, allowing investigators to explore how treatment efficacy varies as the biomarker values continuously increase, as opposed to using arbitrary categories of expression levels resulting in a loss of information. In the age of biomarkers as continuous predictors (eg, expression level percentage rather than positive v negative), alternatives to such dichotomized analyses are needed. The purpose of this article is to provide an overview of an intuitive statistical approach, the subpopulation treatment effect pattern plot (STEPP), for evaluating treatment-effect heterogeneity when a biomarker is measured on a continuous scale. STEPP graphically explores the patterns of treatment effect across overlapping intervals of the biomarker values. As an example, STEPP methodology is used to explore patterns of treatment effect for varying levels of the biomarker Ki-67 in the BIG (Breast International Group) 1-98 randomized clinical trial comparing letrozole with tamoxifen as adjuvant therapy for postmenopausal women with hormone receptor-positive breast cancer. STEPP analyses showed that patients with higher Ki-67 values who were assigned to receive tamoxifen had the poorest prognosis and may benefit most from letrozole.
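The core of STEPP is forming overlapping subpopulations by sliding a window across sorted biomarker values and estimating the treatment effect within each. A minimal sketch with a synthetic continuous outcome (the trial itself used time-to-event endpoints; the names, window sizes, and data here are illustrative assumptions):

```python
import numpy as np

def stepp_subpopulations(biomarker, n1=100, n2=50):
    """Sliding-window STEPP subpopulations: each holds n1 patients,
    and consecutive windows advance by n2 patients (so they overlap)."""
    order = np.argsort(biomarker)
    windows, start = [], 0
    while start + n1 <= len(order):
        windows.append(order[start:start + n1])
        start += n2
    return windows

rng = np.random.default_rng(0)
ki67 = rng.uniform(0, 100, 300)      # synthetic biomarker values
treat = rng.integers(0, 2, 300)      # 1 = letrozole-like arm
# synthetic outcome: treatment benefit grows with biomarker level
outcome = 0.02 * ki67 * treat + rng.normal(0, 1, 300)

for idx in stepp_subpopulations(ki67, n1=100, n2=50):
    t, c = outcome[idx][treat[idx] == 1], outcome[idx][treat[idx] == 0]
    print(f"median Ki-67 {np.median(ki67[idx]):5.1f}: "
          f"effect {t.mean() - c.mean():+.2f}")
```

Plotting the effect estimates against the median biomarker value of each window yields the pattern plot itself.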
The Journal of Infectious Diseases | 2010
Adriana Weinberg; Ann A. Lazar; Gary O. Zerbe; Anthony R. Hayward; Ivan S. F. Chan; Rupert Vessey; Jeffrey L. Silber; Rob Roy MacGregor; Kenny H. Chan; Anne A. Gershon; Myron J. Levin
BACKGROUND Varicella-zoster virus (VZV)-specific cell-mediated immunity is important for protection against VZV disease. We studied the relationship between VZV cell-mediated immunity and age after varicella or VZV vaccination in healthy and human immunodeficiency virus (HIV)-infected individuals. METHODS VZV responder cell frequency (RCF) determinations from 752 healthy and 200 HIV-infected subjects were used to identify group-specific regression curves on age. RESULTS In healthy individuals with past varicella, VZV RCF peaked at 34 years of age. Similarly, VZV RCF after varicella vaccine increased with age in subjects aged <1 to 43 years. In subjects aged 61-90 years, VZV RCF after zoster vaccine decreased with age. HIV-infected children had lower VZV RCF estimates than HIV-infected adults. In both groups, VZV RCF results were low and constant over age. Varicella vaccination of HIV-infected children with CD4 levels ≥20% generated VZV RCF values higher than wild-type infection and comparable to vaccine-induced responses of healthy children. CONCLUSIONS In immunocompetent individuals with prior varicella, VZV RCF peaked in early adulthood. Administration of varicella vaccine to HIV-infected or uninfected individuals aged >5 years generated VZV RCF values similar to those of immunocompetent individuals with immunity induced by wild-type infection. A zoster vaccine increased the VZV RCF of elderly adults aged <75 years to values higher than peak values induced by wild-type infection.
International Journal of Radiation Oncology Biology Physics | 2015
Mack Roach; Tania L. Ceron Lizarraga; Ann A. Lazar
PURPOSE The optimal treatment of clinically localized prostate cancer is controversial. Most studies focus on biochemical (PSA) failure when comparing radical prostatectomy (RP) with radiation therapy (RT), but this endpoint has not been validated as predictive of overall survival (OS) or cause-specific survival (CSS). We analyzed the available literature to determine whether reliable conclusions could be made concerning the effectiveness of RP compared with RT with or without androgen deprivation therapy (ADT), assuming current treatment standards. METHODS Articles published between February 29, 2004, and March 1, 2015, that compared OS and CSS after RP or RT with or without ADT were included. Because the GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) system emphasis is on randomized controlled clinical trials, a reliability score (RS) was explored to further understand the issues associated with the study quality of observational studies, including appropriateness of treatment, source of data, clinical characteristics, and comorbidity. Lower RS values indicated lower reliability. RESULTS Fourteen studies were identified, and 13 were completely evaluable. Thirteen of the 14 studies (93%) were observational studies with low-quality evidence. The median RS was 12 (range, 5-18); the median difference in 10-year OS and CSS favored RP over RT: 10% and 4%, respectively. In studies with an RS ≤12 (average RS 9), the 10-year OS and CSS median differences were 17% and 6%, respectively. For studies with an RS >12 (average RS 15.5), the 10-year OS and CSS median differences were 5.5% and 1%, respectively. Thus, we observed an association between low RS and a higher percentage difference in OS and CSS. CONCLUSIONS Reliable evidence that RP provides a superior CSS to RT with ADT is lacking. The most reliable studies suggest that the differences in 10-year CSS between RP and RT are small, possibly <1%.
Multiple sclerosis and related disorders | 2016
Bardia Nourbakhsh; Julia Nunan-Saah; Amir-Hadi Maghzi; Laura Julian; Rebecca Spain; Chengshi Jin; Ann A. Lazar; Daniel Pelletier; Emmanuelle Waubant
OBJECTIVES Cognitive dysfunction in multiple sclerosis (MS) has been primarily examined in patients with advanced disease. Our objective was to study the longitudinal associations between brain magnetic resonance imaging (MRI) metrics and neuropsychological outcomes in patients with early MS. METHODS Relapsing MS patients within 12 months of onset were enrolled in a neuroprotection trial of riluzole versus placebo with up to 36 months of follow-up. MRI metrics included percent brain volume changes measured by SIENAX normalized measurements [normalized brain parenchymal volume (nBPV), normalized normal-appearing white and gray matter volume (nNAWMV and nGMV)] and T2 lesion volume (T2LV). A neuropsychological battery was performed annually. Mixed model regression measured time trends and associations between imaging and neuropsychological outcomes, adjusting for sex, age and education level. RESULTS Forty-three patients (mean age 36 years; 31 females) were enrolled within 7.5 ± 4.9 months of disease onset. Of the patients with a baseline cognitive assessment, 11.6% met conservative criteria for cognitive impairment. Compared to placebo, riluzole had no significant effect on neuropsychological performance; thus, both groups were combined for the association analyses. Baseline T2LV predicted subsequent changes in PASAT (p=0.006) and SDMT (p=0.002) scores. Longitudinal changes of T2LV were associated with changes in CVLT-II (p<0.001). CONCLUSION These findings suggest that cognitive impairment is relatively common in patients with very early MS. Baseline and longitudinal changes in the lesion load may be associated with some of the most frequently identified changes in cognitive function in MS.
Journal of Educational and Behavioral Statistics | 2011
Ann A. Lazar; Gary O. Zerbe
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the “significance region,” or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can accommodate nonnormally distributed data. Furthermore, the M-M procedure produces biased (downward) results because it uses the Wald test, does not control the inflated Type I error rate due to multiple testing, and requires implementing multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) models (GLMs or GLMMs). These proposed solutions incorporate test statistics that resolve the biased results, control the Type I error rate using Scheffé’s method, and use a single statistical software package to determine the significance region.
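The Johnson-Neyman idea can be sketched for the simplest two-group ANCOVA case: fit a model with a group-by-covariate interaction, form the estimated group difference at each covariate value, and flag values where it exceeds a Scheffé-type simultaneous critical bound. A minimal illustration with synthetic data (function name and data are invented; the article's GLM/GLMM extension is more general):

```python
import numpy as np
from scipy.stats import f as f_dist

def jn_region(x, y, g, alpha=0.05, grid=None):
    """Johnson-Neyman significance region for two-group ANCOVA using a
    Scheffe-type simultaneous critical value over the covariate range."""
    X = np.column_stack([np.ones_like(x), x, g, g * x])  # intercept, slope, group shifts
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = len(y) - X.shape[1]
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)  # coefficient covariance
    crit = np.sqrt(2 * f_dist.ppf(1 - alpha, 2, df))     # Scheffe bound, 2 parameters
    if grid is None:
        grid = np.linspace(x.min(), x.max(), 200)
    d = beta[2] + beta[3] * grid                         # group difference at each x
    se = np.sqrt(cov[2, 2] + 2 * grid * cov[2, 3] + grid**2 * cov[3, 3])
    return grid[np.abs(d) / se > crit]                   # x values where curves differ

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
g = np.repeat([0.0, 1.0], 100)
y = 1 + 0.5 * x + g * (0.3 * x) + rng.normal(0, 1, 200)  # curves diverge as x grows
region = jn_region(x, y, g)
print(f"curves differ for x in [{region.min():.2f}, {region.max():.2f}]")
```

Because the true curves diverge as x increases, the significance region here covers the upper end of the covariate range.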
Clinical Trials | 2016
Ann A. Lazar; Marco Bonetti; Bernard F. Cole; Wai-Ki Yip; Richard D. Gelber
Background: Investigators conducting randomized clinical trials often explore treatment effect heterogeneity to assess whether treatment efficacy varies according to patient characteristics. Identifying heterogeneity is central to making informed personalized healthcare decisions. Treatment effect heterogeneity can be investigated using subpopulation treatment effect pattern plot (STEPP), a non-parametric graphical approach that constructs overlapping patient subpopulations with varying values of a characteristic. Procedures for statistical testing using subpopulation treatment effect pattern plot when the endpoint of interest is survival remain an area of active investigation. Methods: A STEPP analysis was used to explore patterns of absolute and relative treatment effects for varying levels of a breast cancer biomarker, Ki-67, in the phase III Breast International Group 1-98 randomized clinical trial, comparing letrozole to tamoxifen as adjuvant therapy for postmenopausal women with hormone receptor–positive breast cancer. Absolute treatment effects were measured by differences in 4-year cumulative incidence of breast cancer recurrence, while relative effects were measured by the subdistribution hazard ratio in the presence of competing risks using O–E (observed-minus-expected) methodology, an intuitive non-parametric method. While estimation of hazard ratio values based on O–E methodology has been shown, a similar development for the subdistribution hazard ratio has not. Furthermore, we observed that the subpopulation treatment effect pattern plot analysis may not produce results, even with 100 patients within each subpopulation. After further investigation through simulation studies, we observed inflation of the type I error rate of the traditional test statistic and sometimes singular variance–covariance matrix estimates that may lead to results not being produced. 
This is due to the lack of a sufficient number of events within the subpopulations, which we refer to as instability of the subpopulation treatment effect pattern plot analysis. We introduce methodology designed to improve stability of the subpopulation treatment effect pattern plot analysis and generalize O–E methodology to the competing risks setting. Simulation studies were designed to assess the type I error rate of the tests for a variety of treatment effect measures, including subdistribution hazard ratio based on O–E estimation. This subpopulation treatment effect pattern plot methodology and standard regression modeling were used to evaluate heterogeneity of Ki-67 in the Breast International Group 1-98 randomized clinical trial. Results: We introduce methodology that generalizes O–E methodology to the competing risks setting and that improves stability of the STEPP analysis by pre-specifying the number of events across subpopulations while controlling the type I error rate. The subpopulation treatment effect pattern plot analysis of the Breast International Group 1-98 randomized clinical trial showed that patients with high Ki-67 percentages may benefit most from letrozole, while heterogeneity was not detected using standard regression modeling. Conclusion: The STEPP methodology can be used to study complex patterns of treatment effect heterogeneity, as illustrated in the Breast International Group 1-98 randomized clinical trial. For the subpopulation treatment effect pattern plot analysis, we recommend a minimum of 20 events within each subpopulation.
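The stability idea of pre-specifying event counts suggests forming subpopulations by number of observed events rather than number of patients, so every window carries enough information for the test statistic. A rough sketch of that idea on synthetic data (function and parameter names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def event_windows(biomarker, event, n_events=20, step_events=10):
    """Overlapping STEPP subpopulations defined by event counts: each
    window, taken over patients sorted by biomarker value, contains
    exactly `n_events` events; consecutive windows advance by
    `step_events` events."""
    order = np.argsort(biomarker)
    ev_pos = np.flatnonzero(np.asarray(event)[order])  # event patients in sorted order
    windows, k = [], 0
    while k + n_events <= len(ev_pos):
        lo, hi = ev_pos[k], ev_pos[k + n_events - 1] + 1
        windows.append(order[lo:hi])   # includes non-event patients in between
        k += step_events
    return windows

rng = np.random.default_rng(2)
ki67 = rng.uniform(0, 100, 500)
event = rng.random(500) < 0.15         # ~15% experience recurrence
for w in event_windows(ki67, event, n_events=20, step_events=10):
    print(f"n={len(w):3d}, events={int(event[w].sum()):2d}, "
          f"median Ki-67={np.median(ki67[w]):5.1f}")
```

Note that window sizes now vary: regions of the biomarker scale with few events produce wider windows, which is exactly what keeps each test statistic estimable.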