Majid Mojirsheibani
California State University, Northridge
Publications
Featured research published by Majid Mojirsheibani.
Journal of the American Statistical Association | 1999
Majid Mojirsheibani
Abstract: I consider a method for combining different classifiers to develop more effective classification rules. The proposed combined classifier, which turns out to be strongly consistent, is quite simple to use in real applications. It is also shown that this combined classifier is asymptotically (strongly) at least as good as any one of the individual classifiers. In addition, if one of the individual classifiers is already asymptotically Bayes optimal, then so is the combined classifier.
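As a rough illustration of one way to combine classifiers in this spirit, the Python sketch below groups training points by their vector of individual predictions and takes a majority vote within each group; the function name, the majority-vote rule, and the fallback for unseen prediction patterns are our own illustrative choices, not necessarily the paper's exact construction.

```python
import numpy as np

def combine_by_pattern(train_preds, y_train, test_preds):
    # group training points by their vector of individual predictions and
    # take the majority class within each group (an illustrative sketch)
    votes = {}
    for pattern, label in zip(map(tuple, train_preds), y_train):
        votes.setdefault(pattern, []).append(label)
    out = []
    for pattern in map(tuple, test_preds):
        ys = votes.get(pattern)
        # fall back to the first classifier's vote for unseen patterns
        out.append(max(set(ys), key=ys.count) if ys else pattern[0])
    return np.array(out)
```

On a test point, the combined rule thus predicts the most common class among training points whose M individual classifiers produced the same prediction pattern.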
Statistics & Probability Letters | 1997
Majid Mojirsheibani
In this article we propose a data-based method for constructing combined classifiers. The resulting classifiers, which are linear in nature, turn out to be consistent.
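As an illustration of a data-based linear combination, the sketch below chooses a convex weight for two classifier scores by minimizing the empirical misclassification rate over a grid, using a held-out validation split; the grid, the 0.5 threshold, and the restriction to M = 2 are assumptions made for brevity, not the paper's construction.

```python
import numpy as np

def choose_weight(val_scores, y_val, grid=np.linspace(0.0, 1.0, 21)):
    # val_scores: (n, 2) scores of two classifiers on a validation split;
    # pick the convex weight minimizing the empirical misclassification rate
    best_w, best_err = 0.5, np.inf
    for w in grid:
        pred = (w * val_scores[:, 0] + (1.0 - w) * val_scores[:, 1] > 0.5)
        err = np.mean(pred != y_val)
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```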
Communications in Statistics - Simulation and Computation | 2002
Majid Mojirsheibani
Abstract: In this article we consider two methods for combining a number of individual classifiers in order to construct more effective classification rules. The effectiveness of these methods, as measured by a comparison of their misclassification error rates with those of the individual classifiers, is assessed via a number of examples that involve simulated data. We also compare the results to those of two existing combining procedures.
Statistics | 2007
Majid Mojirsheibani
This article presents two approximations to the error quantity $I_n(p)=\int |f_n(t)-f(t)|^p \, d\mu(t)$, where $f_n$ is the kernel density estimate of the underlying density $f$ and $\mu$ is a measure on the Borel sets of $\mathbb{R}$. The first method proposes the easier-to-compute sum representation $J_n(p)$. We study the difference $|J_n(p)-I_n(p)|$ in terms of its rates of convergence to zero. A central limit theorem for $J_n(p)$ will then follow immediately. Our second approximation is based on a bootstrap version of $I_n(p)$. Using a recent result of [Csörgő, M., Horváth, L. and Kokoszka, P., 2000, Approximations for bootstrapped empirical processes. Proceedings of the American Mathematical Society, 128, 2457–2464.] on approximations of bootstrap empirical processes, we establish both conditional and unconditional bootstrap central limit theorems. The focus of the article is on the important case where the measure $\mu$ is either the Lebesgue measure or the measure with density $f$. For the bootstrap, the more general case where $d\mu(t)=w(t)\,dt$, $w(t)>0$, will also be briefly addressed.
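The following Python sketch illustrates the kind of quantities involved: it computes the $L_1$ error $I_n(1)$ for a Gaussian kernel estimate of a standard normal density, together with a naive (Efron-type) bootstrap analogue; the sample size, bandwidth, and integration grid are arbitrary illustrative choices, and the sum representation $J_n(p)$ itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, B = 200, 0.4, 500
x = rng.normal(size=n)                       # sample from N(0, 1)
t = np.linspace(-4.0, 4.0, 400)              # evaluation grid
dt = t[1] - t[0]

def kde(data, t, h):
    # Gaussian kernel density estimate evaluated on the grid t
    u = (t[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

f_true = np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)   # the true density
f_n = kde(x, t, h)
I_n1 = np.sum(np.abs(f_n - f_true)) * dt            # Riemann sum for I_n(1)

# naive bootstrap analogue: resample the data, recompute the estimate,
# and measure its L1 distance to f_n (which stands in for the unknown f)
boot = [np.sum(np.abs(kde(rng.choice(x, size=n), t, h) - f_n)) * dt
        for _ in range(B)]
print(I_n1, np.quantile(boot, [0.05, 0.95]))
```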
Electronic Journal of Statistics | 2009
Shojaeddin Chenouri; Majid Mojirsheibani; Zahra Montazeri
Methods are proposed to construct empirical measures when there are missing terms among the components of a random vector. Furthermore, Vapnik–Chervonenkis-type exponential bounds are obtained on the uniform deviations of these estimators from the true probabilities. These results can then be used to deal with classical problems, such as statistical classification via empirical risk minimization, when there are missing covariates among the data. Another application involves the uniform estimation of a distribution function. AMS 2000 subject classifications: Primary 60G50, 62G15; secondary 62H30.
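For orientation, one standard textbook form of the classical Vapnik–Chervonenkis exponential bound for i.i.d. data reads as follows (the constants are those of a common version of the inequality; the article's contribution is to obtain bounds of this type in the missing-data setting):

```latex
% Classical VC bound: \mu_n is the empirical measure, \mu the true measure,
% and S_{\mathcal{A}}(n) the n-th shatter coefficient of the class \mathcal{A}:
P\Bigl( \sup_{A \in \mathcal{A}} \bigl| \mu_n(A) - \mu(A) \bigr| > \varepsilon \Bigr)
\;\le\; 8 \, S_{\mathcal{A}}(n) \, e^{-n \varepsilon^{2} / 32}.
```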
Journal of Nonparametric Statistics | 2017
Majid Mojirsheibani; William Pouliot
Abstract: A weighted bootstrap method is proposed to approximate the distribution of the $L_p$ norms of two-sample statistics involving kernel density estimators. Using an approximation theorem of Horváth, Kokoszka and Steinebach [(2000) 'Approximations for Weighted Bootstrap Processes with an Application', Statistics and Probability Letters, 48, 59–70], which allows one to replace the weighted bootstrap empirical process by a sequence of Gaussian processes, we establish an unconditional bootstrap central limit theorem for such statistics. The proposed method is quite straightforward to implement in practice. Furthermore, through some simulation studies, it is shown that, depending on the weights chosen, the proposed weighted bootstrap approximation can sometimes outperform both the classical large-sample theory and Efron's [(1979) 'Bootstrap Methods: Another Look at the Jackknife', Annals of Statistics, 7, 1–26] original bootstrap algorithm.
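Below is a small numerical sketch of a weighted bootstrap for a two-sample $L_1$ statistic of kernel density estimates; the exponential weight distribution, the centering of the bootstrap statistic, and all tuning constants are our own illustrative assumptions rather than the paper's prescriptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, h, B = 150, 150, 0.4, 500
x, y = rng.normal(size=n), rng.normal(size=m)   # two samples with a common law
t = np.linspace(-4.0, 4.0, 400)
dt = t[1] - t[0]

def wkde(data, w, t, h):
    # kernel density estimate with observation weights w (summing to one)
    u = (t[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * u**2) * w[None, :]).sum(axis=1) / (h * np.sqrt(2 * np.pi))

f_x = wkde(x, np.full(n, 1 / n), t, h)
f_y = wkde(y, np.full(m, 1 / m), t, h)
T = np.sum(np.abs(f_x - f_y)) * dt              # two-sample L1 statistic

# weighted bootstrap: replace the uniform weights 1/n by normalized
# i.i.d. exponential weights and center at the original estimates
T_boot = []
for _ in range(B):
    wx = rng.exponential(size=n); wx /= wx.sum()
    wy = rng.exponential(size=m); wy /= wy.sum()
    d = (wkde(x, wx, t, h) - f_x) - (wkde(y, wy, t, h) - f_y)
    T_boot.append(np.sum(np.abs(d)) * dt)
print(T, np.quantile(T_boot, 0.95))             # statistic vs. bootstrap cutoff
```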
Journal of Statistical Computation and Simulation | 2015
Majid Mojirsheibani; Zahra Montazeri
Methods are proposed to combine several individual classifiers in order to develop more accurate classification rules. The proposed algorithm uses Rademacher–Walsh polynomials to combine M (≥ 2) individual classifiers in a nonlinear way. The resulting classifier is optimal in the sense that its misclassification error rate is always less than or equal to that of each constituent classifier. A number of numerical examples (based on both real and simulated data) are also given; these demonstrate some new and far-reaching benefits of working with combined classifiers.
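The Rademacher–Walsh construction itself is easy to write down: with the M individual predictions coded as ±1, the basis consists of all products over subsets of classifiers. The sketch below builds that basis and fits the combining coefficients by least squares, which is only an illustrative stand-in for the optimal choice of coefficients analyzed in the paper.

```python
import numpy as np
from itertools import combinations

def rw_basis(preds):
    # Rademacher-Walsh basis: all products of the +/-1 coded outputs of the
    # M individual classifiers, including the constant (empty product)
    z = 2 * preds - 1                      # code {0, 1} labels as {-1, +1}
    n, M = z.shape
    cols = [np.ones(n)]
    for r in range(1, M + 1):
        for S in combinations(range(M), r):
            cols.append(np.prod(z[:, list(S)], axis=1))
    return np.column_stack(cols)

def fit_combined(train_preds, y_train):
    # least-squares fit of the +/-1 labels on the basis (illustrative only)
    Psi = rw_basis(train_preds)
    theta, *_ = np.linalg.lstsq(Psi, 2 * y_train - 1, rcond=None)
    return lambda preds: (rw_basis(preds) @ theta > 0).astype(int)
```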
Journal of Multivariate Analysis | 2012
Majid Mojirsheibani
This article considers a weighted bootstrap method to approximate the distribution of the maximal deviation of kernel density estimates over general connected compact sets. The theoretical validity of this approximation is also established. Furthermore, simulation studies show that, depending on the choice of the weights, the proposed weighted bootstrap can have superior finite-sample performance compared with both the large-sample theory and Efron's (1979) original bootstrap.
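In symbols, one common way to set up such a weighted bootstrap is the following (the notation and the i.i.d. positive weights are our illustrative assumptions, not necessarily the exact construction used in the article):

```latex
% Weighted-bootstrap version of the kernel density estimate f_n, with
% kernel K_h(u) = K(u/h)/h and i.i.d. positive random weights w_i:
f_n^{w}(t) = \sum_{i=1}^{n} \frac{w_i}{\sum_{j=1}^{n} w_j} \, K_h(t - X_i),
\qquad
D_n^{*} = \sup_{t \in C} \bigl| f_n^{w}(t) - f_n(t) \bigr| ,
```

the conditional law of $D_n^{*}$, given the data, is then used to approximate the distribution of the maximal deviation of $f_n$ itself over the compact set $C$.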
Computational Statistics & Data Analysis | 2002
Majid Mojirsheibani
Abstract: This article proposes a two-step iterative procedure to improve the misclassification error rate of an initial classification rule. The first step involves an iterative method for generating a sequence of classifiers from the initial one; this is based on the augmentation of the feature vector with some new pseudo-predictors. Unlike other components of the feature vector, these new pseudo-predictors tend to provide information primarily on the performance or correctness of the classifier itself. The second step of the proposed procedure "pools together" the classifiers constructed in step one in order to produce a new classifier which is far more effective (in an asymptotic sense) than the initial classifier. In addition to these results, a data-splitting approach for selecting the number of iterations will also be discussed. Both the mechanics and the asymptotic validity of the proposed procedure are studied.
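Here is a heavily simplified sketch of the two-step idea, using a k-NN base learner and appending the current predicted label as the pseudo-predictor (the paper's pseudo-predictors carry information about the classifier's correctness, so this is only a rough illustration); the pooling step here is a plain majority vote.

```python
import numpy as np

def knn_predict(X_tr, y_tr, X, k=5):
    # a simple k-nearest-neighbour base classifier (binary labels 0/1)
    d = ((X[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_tr[idx].mean(axis=1) > 0.5).astype(int)

def iterate_and_pool(X_tr, y_tr, X_te, T=3, k=5):
    preds, Xa_tr, Xa_te = [], X_tr.copy(), X_te.copy()
    for _ in range(T):
        p_tr = knn_predict(Xa_tr, y_tr, Xa_tr, k)
        p_te = knn_predict(Xa_tr, y_tr, Xa_te, k)
        preds.append(p_te)
        # step 1: augment the feature vectors with a pseudo-predictor
        Xa_tr = np.hstack([Xa_tr, p_tr[:, None]])
        Xa_te = np.hstack([Xa_te, p_te[:, None]])
    # step 2: pool the sequence of classifiers by majority vote
    return (np.mean(preds, axis=0) > 0.5).astype(int)
```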
Communications in Statistics-theory and Methods | 2017
Majid Mojirsheibani; Kevin Manley; William Pouliot
Abstract: We consider the problem of estimation of a density function in the presence of incomplete data and study the Hellinger distance between our proposed estimators and the true density function. Here, the presence of incomplete data is handled by utilizing a Horvitz–Thompson-type inverse weighting approach, where the weights are the estimates of the unknown selection probabilities. We also address the problem of estimation of a regression function with incomplete data.
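To make the inverse-weighting idea concrete, here is a small simulation sketch: the covariate X is always observed, Y is missing at random with probability depending on X, the selection probabilities are estimated crudely by binning X, and each observed Y_i then enters the kernel density estimate with weight $1/\hat\pi(X_i)$. Everything here (the data model, the binning estimator, the bandwidth) is an illustrative assumption, not the estimator studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(size=n)                         # always-observed covariate
y = rng.normal(loc=x, size=n)                   # variable whose density we want
pi_true = 1 / (1 + np.exp(-(2 * x - 0.5)))      # MAR selection probabilities
obs = rng.uniform(size=n) < pi_true             # indicator: is Y_i observed?

# crude estimate of the selection probability pi(x): bin x into 10 cells
bins = np.minimum((x * 10).astype(int), 9)
pi_hat = np.array([obs[bins == b].mean() for b in range(10)])[bins]

def ht_kde(t, h=0.3):
    # Horvitz-Thompson-type estimate: each observed Y_i gets weight 1/pi_hat
    u = (t[:, None] - y[obs][None, :]) / h
    w = 1.0 / pi_hat[obs]
    return (np.exp(-0.5 * u**2) * w[None, :]).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

t = np.linspace(-3.0, 4.0, 200)
f_hat = ht_kde(t)                               # density estimate despite missing Y's
```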