Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Albert Vexler is active.

Publications


Featured research published by Albert Vexler.


Computational Statistics & Data Analysis | 2010

Empirical likelihood ratios applied to goodness-of-fit tests based on sample entropy

Albert Vexler; Gregory Gurevich

The likelihood approach based on empirical distribution functions is a well-accepted statistical tool for testing. However, the proof schemes of Neyman-Pearson-type lemmas induce consideration of density-based likelihood ratios to obtain powerful test statistics. In this article, we introduce a distribution-free density-based likelihood technique, applied to tests of goodness-of-fit. We focus on tests for normality and uniformity, which are common tasks in applied studies. The well-known goodness-of-fit tests based on sample entropy are shown to be a product of the proposed empirical likelihood (EL) methodology. Although the efficiency of test statistics based on classes of entropy estimators has been widely addressed in the statistical literature, estimation of the sample entropy has not been invariantly defined, and hence produces tests that are difficult to apply in real-data studies. The proposed EL approach defines clear forms of the entropy-based tests. Monte Carlo simulation results confirm that the proposed method is preferable from a power perspective. Real-data examples study the proposed approach in practice.
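The entropy-based construction the abstract refers to can be illustrated with a short sketch (an illustrative Python sketch, not the authors' code): Vasicek's m-spacing estimator of sample entropy, plugged into a normality statistic that compares the estimated entropy with the Gaussian maximum-entropy bound.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek's m-spacing estimator of differential entropy:
    H_mn = (1/n) * sum_i log( n/(2m) * (X_(i+m) - X_(i-m)) ),
    with order-statistic indices clipped at the sample boundaries."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    idx = np.arange(n)
    hi = np.clip(idx + m, 0, n - 1)
    lo = np.clip(idx - m, 0, n - 1)
    spacings = xs[hi] - xs[lo]
    return np.mean(np.log(n / (2.0 * m) * spacings))

def entropy_normality_stat(x, m=5):
    """Statistic exp(H_mn) / s.  Since the normal maximizes entropy for
    a fixed variance, under normality this approaches sqrt(2*pi*e) ~ 4.13,
    and unusually small values point away from normality."""
    s = np.std(x, ddof=1)
    return float(np.exp(vasicek_entropy(x, m)) / s)
```

In practice the rejection threshold would be calibrated by Monte Carlo, as in the article; the spacing parameter `m` here is an arbitrary illustrative choice.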


Epidemiology | 2010

Linear regression with an independent variable subject to a detection limit.

Lei Nie; Haitao Chu; Chenglong Liu; Stephen R. Cole; Albert Vexler; Enrique F. Schisterman

Background: Linear regression with a left-censored independent variable X due to a limit of detection (LOD) was recently considered by two groups of researchers: Richardson and Ciampi (Am J Epidemiol. 2003;157:355-363) and Schisterman et al (Am J Epidemiol. 2006;163:374-383). Methods: Both groups obtained consistent estimators of the regression slopes by replacing left-censored X with a constant: the expectation of X given X below the LOD, E(X|X<LOD), in the former, and the sample mean of X given X above the LOD in the latter. Results: Schisterman et al argued that their approach is the better choice because the sample mean of X given X above the LOD is available, whereas E(X|X<LOD) is unknown. Other substitution methods, such as replacing the left-censored values with the LOD or LOD/2, have been used extensively in the literature. Simulations were conducted to compare performance under two scenarios, in which the independent variable is normally and non-normally distributed. Conclusion: Recommendations are given based on theoretical and simulation results and are illustrated with a case study.
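The substitution idea can be sketched numerically (an illustrative sketch, not the authors' estimators; the normal-theory formula for E(X|X<LOD) and the helper names are mine):

```python
import math
import numpy as np

def cond_mean_below(mu, sigma, lod):
    """E[X | X < LOD] for X ~ N(mu, sigma^2), via the inverse Mills ratio."""
    a = (lod - mu) / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    return mu - sigma * phi / Phi

def slope_with_substitution(x, y, censored, fill):
    """OLS slope after replacing left-censored x values with the constant `fill`."""
    x_sub = np.where(censored, fill, x)
    return float(np.cov(x_sub, y, bias=True)[0, 1] / np.var(x_sub))
```

In a simulation with X ~ N(0, 1), filling the censored values with `cond_mean_below(0, 1, lod)` recovers the true slope in large samples, which is the consistency property the Background describes; ad hoc fills such as LOD/2 do not share this guarantee in general.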


Reproductive Toxicology | 2010

Organochlorine pesticides and endometriosis

Maureen A. Cooney; Germaine M. Buck Louis; Mary L. Hediger; Albert Vexler; Paul J. Kostyniak

Limited study of persistent organochlorine pesticides (OCPs) and endometriosis has been conducted. One hundred women aged 18-40 years who were undergoing laparoscopy provided 20 cm³ of blood for toxicologic analysis, and surgeons completed operative reports regarding the presence of endometriosis. Gas chromatography with electron capture was used to quantify (ng/g serum) six OCPs. Logistic regression was used to estimate adjusted odds ratios (aOR) and 95% confidence intervals (CI) for individual pesticides and for groups based on chemical structure, adjusting for current cigarette smoking and lipids. The highest tertile of aromatic fungicide exposure was associated with a fivefold risk of endometriosis (aOR=5.3; 95% CI, 1.2-23.6) compared to the lowest tertile. Similar results were found for t-nonachlor and HCB. These are the first such findings in a laparoscopic cohort to suggest an association between OCP exposure and endometriosis. More prospective studies are necessary to ensure temporal ordering and to confirm these findings.


Paediatric and Perinatal Epidemiology | 2008

To pool or not to pool, from whether to when: applications of pooling to biospecimens subject to a limit of detection

Enrique F. Schisterman; Albert Vexler

Pooling of biological specimens has been utilised as a cost-efficient sampling strategy, but cost is not the only limiting factor in biomarker development and evaluation. We examine the effect of different sampling strategies for biospecimens whose exposure levels cannot be detected below a detection threshold (DT). The paper compares the use of pooled samples with a randomly selected sample from a cohort in order to evaluate the efficiency of parameter estimates. The proposed approach shows that a pooling design is more efficient than a random-sample strategy under certain circumstances. Moreover, because pooling minimises the amount of information lost below the DT, the use of pooled data is preferable (in the context of parametric estimation) to using all available individual measurements, for certain values of the DT. We propose a combined design, which applies pooled and unpooled biospecimens, in order to capture the strengths of the different sampling strategies and overcome instrument limitations (i.e. the DT). Several Monte Carlo simulations and an example based on actual biomarker data illustrate the results of the article.
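Why pooling loses less information below the DT can be seen in a toy simulation (illustrative only; the biomarker distribution, pool size g, and threshold below are arbitrary choices, not the paper's settings): averaging g specimens concentrates the pooled measurements around the mean, so far fewer of them fall below the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

def frac_below_dt(values, dt):
    """Fraction of assay values lost below the detection threshold."""
    return float(np.mean(values < dt))

# 6000 individual specimens with a right-skewed biomarker distribution
individual = rng.lognormal(mean=0.0, sigma=1.0, size=6000)

# each pooled assay measures the average of g = 3 specimens
g = 3
pooled = individual.reshape(-1, g).mean(axis=1)

dt = 0.5
loss_individual = frac_below_dt(individual, dt)  # a sizeable share censored
loss_pooled = frac_below_dt(pooled, dt)          # far fewer pooled values lost
```

This is only the information-loss half of the trade-off; the article's combined pooled-unpooled design also weighs the loss of individual-level information that pooling itself introduces.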


Canadian Journal of Statistics-revue Canadienne De Statistique | 2004

Bayesian identifiability and misclassification in multinomial data

Tim B. Swartz; Yoel Haitovsky; Albert Vexler; Tae Y. Yang

The authors consider the Bayesian analysis of multinomial data in the presence of misclassification. Misclassification of the multinomial cell entries leads to problems of identifiability, which are categorized into two types. The first type, referred to as permutation-type nonidentifiabilities, may be handled with constraints that are suggested by the structure of the problem. Problems of identifiability of the second type are addressed with informative prior information via Dirichlet distributions. Computations are carried out using a Gibbs sampling algorithm.


Biometrics | 2010

Analyzing Incomplete Data Subject to a Threshold using Empirical Likelihood Methods: An Application to a Pneumonia Risk Study in an ICU Setting

Jihnhee Yu; Albert Vexler; Lili Tian

The initial detection of ventilator-associated pneumonia (VAP) for inpatients in an intensive care unit requires composite symptom evaluation using clinical criteria such as the clinical pulmonary infection score (CPIS). When the CPIS is above a threshold value, bronchoalveolar lavage (BAL) is performed to confirm the diagnosis by counting actual bacterial pathogens. Thus, CPIS and BAL results are closely related, and both are important indicators of pneumonia, although the BAL data are incomplete (observed only when the CPIS exceeds the threshold). To compare pneumonia risks among treatment groups with such incomplete data, we derive a method that combines nonparametric empirical likelihood ratio techniques with classical testing for parametric models. This technique augments the study power by enabling us to use all observed data. The asymptotic properties of the proposed method are investigated theoretically. Monte Carlo simulations confirm both the asymptotic results and the good power properties of the proposed method. The method is applied to actual data obtained in clinical practice settings to compare VAP risks among treatment groups.


Communications in Statistics - Simulation and Computation | 2009

Modifications of the Empirical Likelihood Interval Estimation with Improved Coverage Probabilities

Albert Vexler; Shuling Liu; Le Kang; Alan D. Hutson

The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature in the context of powerful nonparametric statistical methods for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) can usually provide an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structures. Thus, we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the Chen (1995) t-test modification to the EL ratio test. We show that the Chen approach leads to a location change of the observed data, whereas the classical Bartlett method is known to be a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study. A real-data example studies the proposed approach in practice.
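The classical EL ratio test for a mean that the article starts from can be sketched as follows (a minimal sketch after Owen's construction, not the authors' modified test; the Chen and Bartlett corrections are not applied here):

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10, max_iter=200):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen, 1988).

    Maximizes prod(n * p_i) subject to sum p_i (x_i - mu) = 0, giving
    p_i = 1 / (n * (1 + lam * (x_i - mu))); lam is found by a safeguarded
    Newton search.  Under H0 the statistic is asymptotically chi^2_1.
    """
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf  # mu outside the convex hull of the data: EL is 0
    lam = 0.0
    for _ in range(max_iter):
        w = 1.0 + lam * z
        grad = np.sum(z / w)
        hess = -np.sum((z / w) ** 2)
        lam_new = lam - grad / hess
        while np.any(1.0 + lam_new * z <= 0.0):
            lam_new = 0.5 * (lam + lam_new)  # halve the step to keep p_i > 0
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return float(2.0 * np.sum(np.log(1.0 + lam * z)))
```

An EL confidence interval is the set of `mu` values with `el_log_ratio(x, mu)` below a chi-square quantile; the Chen and Bartlett modifications studied in the article adjust exactly this finite-sample calibration.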


Statistics in Medicine | 2009

Hybrid pooled-unpooled design for cost-efficient measurement of biomarkers

Enrique F. Schisterman; Albert Vexler; Sunni L. Mumford; Neil J. Perkins

Evaluating biomarkers in epidemiological studies can be expensive and time consuming. Many investigators use techniques such as random sampling or pooling biospecimens in order to cut costs and save time on experiments. Commonly, analyses based on pooled data are strongly restricted by distributional assumptions that are challenging to validate because of the pooled biospecimens. Random sampling provides data that can be easily analyzed. However, random sampling methods are not optimal cost-efficient designs for estimating means. We propose and examine a cost-efficient hybrid design that involves taking a sample of both pooled and unpooled data in an optimal proportion in order to efficiently estimate the unknown parameters of the biomarker distribution. In addition, we find that this design can be used to estimate and account for different types of measurement and pooling error, without the need to collect validation data or repeated measurements. We show an example where application of the hybrid design leads to minimization of a given loss function based on variances of the estimators of the unknown parameters. Monte Carlo simulation and biomarker data from a study on coronary heart disease are used to demonstrate the proposed methodology.


Communications in Statistics - Simulation and Computation | 2009

A Parametric Bootstrap Test for Comparing Heteroscedastic Regression Models

Lili Tian; Chang-Xing Ma; Albert Vexler

Testing the equality of regression coefficients across several regression models is a common problem encountered in many applied fields. This article presents a parametric bootstrap (PB) approach and compares its performance to that of another simulation-based approach, namely the generalized variable approach. Simulation studies indicate that the PB approach controls the Type I error rates satisfactorily regardless of the number of regression models and sample sizes, whereas the generalized variable approach tends to be very liberal as the number of regression models goes up. The proposed PB approach is illustrated using a data set from a stability study.
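A generic parametric-bootstrap recipe for the equal-slopes problem looks like the following (a hedged sketch under simplifying assumptions: simple linear regressions, normal errors, and a precision-weighted heterogeneity statistic of my own choosing, not the article's exact test statistic):

```python
import numpy as np

def fit_slope(x, y):
    """OLS fit of y = a + b*x; returns (a, b, residual variance)."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return a, b, np.sum(resid ** 2) / (len(x) - 2)

def slope_equality_stat(groups):
    """Precision-weighted heterogeneity statistic for H0: all slopes equal."""
    fits = [fit_slope(x, y) for x, y in groups]
    b = np.array([f[1] for f in fits])
    # weight = inverse estimated variance of each fitted slope
    w = np.array([np.sum((x - x.mean()) ** 2) / f[2]
                  for (x, _), f in zip(groups, fits)])
    b_bar = np.sum(w * b) / np.sum(w)
    return float(np.sum(w * (b - b_bar) ** 2))

def pb_pvalue(groups, n_boot=999, seed=0):
    """Parametric-bootstrap p-value: resample each regression under a fitted
    common-slope null with group-specific intercepts and error variances."""
    rng = np.random.default_rng(seed)
    t_obs = slope_equality_stat(groups)
    fits = [fit_slope(x, y) for x, y in groups]
    b = np.array([f[1] for f in fits])
    w = np.array([np.sum((x - x.mean()) ** 2) / f[2]
                  for (x, _), f in zip(groups, fits)])
    b0 = np.sum(w * b) / np.sum(w)  # pooled slope under H0
    count = 0
    for _ in range(n_boot):
        sim = []
        for (x, y), f in zip(groups, fits):
            a0 = y.mean() - b0 * x.mean()
            y_star = a0 + b0 * x + rng.normal(scale=np.sqrt(f[2]), size=len(x))
            sim.append((x, y_star))
        count += slope_equality_stat(sim) >= t_obs
    return (count + 1) / (n_boot + 1)
```

The bootstrap null model here reuses the unrestricted residual variances, a common simplification; the point of the PB approach is that calibrating against data simulated from the fitted null keeps the Type I error near nominal even as the number of models grows.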


Statistics in Medicine | 2009

Generalized ROC curve inference for a biomarker subject to a limit of detection and measurement error

Neil J. Perkins; Enrique F. Schisterman; Albert Vexler

The receiver operating characteristic (ROC) curve is a tool commonly used to evaluate a biomarker's utility in the clinical diagnosis of disease, especially during biomarker development research. Emerging biomarkers are often measured with random measurement error and are subject to limits of detection that hinder their potential utility or mask an ability to discriminate by negatively biasing estimates of the ROC curve and the subsequent area under the curve. Methods have been developed to correct the ROC curve for each of these sources of bias separately; here we develop a method that corrects for both simultaneously through replicate measures and maximum likelihood. Our method is evaluated via simulation study and applied to two potential discriminators of women with and without preeclampsia.
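The negative bias that a limit of detection induces in the empirical ROC curve is easy to reproduce (an illustrative sketch; the Mann-Whitney AUC estimator is standard, but the LOD/2 substitution shown is the naive practice the article improves upon, not the authors' correction):

```python
import numpy as np

def empirical_auc(cases, controls):
    """Mann-Whitney estimate of the area under the ROC curve
    (ties count one half)."""
    c = np.asarray(cases)[:, None]
    d = np.asarray(controls)[None, :]
    return float(np.mean((c > d) + 0.5 * (c == d)))

rng = np.random.default_rng(4)
controls = rng.normal(0.0, 1.0, size=2000)
cases = rng.normal(1.0, 1.0, size=2000)   # true AUC = Phi(1/sqrt(2)) ~ 0.76

auc_full = empirical_auc(cases, controls)

# naive handling of a limit of detection: censor both groups at the LOD
# and substitute LOD/2, creating mass ties that pull the AUC downward
lod = 0.5
auc_lod = empirical_auc(np.where(cases < lod, lod / 2.0, cases),
                        np.where(controls < lod, lod / 2.0, controls))
```

Every case-control pair in which both values fall below the LOD becomes a tie worth 0.5, even when the underlying case value exceeded the control value, which is the source of the downward bias.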

Collaboration


Dive into Albert Vexler's collaborations.

Top Co-Authors

Alan D. Hutson, Roswell Park Cancer Institute
Jihnhee Yu, State University of New York System
Gregory Gurevich, Technion – Israel Institute of Technology
Aiyi Liu, National Institutes of Health
Lili Tian, University at Buffalo
Neil J. Perkins, National Institutes of Health
Wan-Min Tsai, State University of New York System