Mortaza Jamshidian
California State University, Fullerton
Publications
Featured research published by Mortaza Jamshidian.
Journal of Educational and Behavioral Statistics | 1999
Mortaza Jamshidian; Peter M. Bentler
We consider maximum likelihood (ML) estimation of mean and covariance structure models when data are missing. Expectation-maximization (EM), generalized expectation-maximization (GEM), Fletcher-Powell, and Fisher scoring algorithms are described for parameter estimation. It is shown how the machinery of software that handles the complete-data problem can be used to implement each algorithm. A numerical differentiation method for obtaining the observed information matrix and the standard errors is given; this method also uses the complete-data program machinery. The likelihood ratio test is discussed for testing hypotheses. Three examples are used to compare the cost of the four algorithms mentioned above, as well as to illustrate the standard error estimation and the hypothesis test considered. The sensitivity of the ML estimates, as well as the mean-imputed and listwise-deletion estimates, to the missing-data mechanism is investigated using three artificial data sets that are missing completely at random (MCAR), missing at random (MAR), and neither MCAR nor MAR.
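As an illustration of the complete-data machinery at work, here is a minimal sketch — not the authors' implementation — of EM for the unstructured mean and covariance of a multivariate normal with NaN-coded missing values. The function name, mean-imputation start, and stopping rule are choices of this sketch.

```python
import numpy as np

def em_mvnorm(X, n_iter=200, tol=1e-8):
    """EM for the mean and covariance of a multivariate normal when
    entries of X are missing (encoded as NaN). Starts from mean imputation."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    Xf = np.where(miss, mu, X)
    sigma = np.cov(Xf, rowvar=False, bias=True)
    for _ in range(n_iter):
        Xhat = Xf.copy()
        C = np.zeros((p, p))               # accumulated conditional covariances
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            if not o.any():                # row entirely missing
                Xhat[i] = mu
                C += sigma
                continue
            coef = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            Xhat[i, m] = mu[m] + coef @ (X[i, o] - mu[o])   # E-step: conditional mean
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - coef @ sigma[np.ix_(o, m)]
        mu_new = Xhat.mean(axis=0)         # M-step
        D = Xhat - mu_new
        sigma_new = (D.T @ D + C) / n
        done = (np.max(np.abs(mu_new - mu)) < tol
                and np.max(np.abs(sigma_new - sigma)) < tol)
        mu, sigma, Xf = mu_new, sigma_new, Xhat
        if done:
            break
    return mu, sigma
```

The E-step keeps the conditional covariance term `C`; dropping it would reduce the method to iterated regression imputation and bias the covariance downward.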
Journal of the Royal Statistical Society, Series B (Statistical Methodology) | 1997
Mortaza Jamshidian; Robert I. Jennrich
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many applications and desirable convergence properties make it very attractive. Its sometimes slow convergence, however, has prompted researchers to propose methods to accelerate it. We review these methods, classifying them into three groups: pure, hybrid and EM-type accelerators. We propose a new pure and a new hybrid accelerator both based on quasi-Newton methods and numerically compare these and two other quasi-Newton accelerators. For this we use examples in each of three areas: Poisson mixtures, the estimation of covariance from incomplete data and multivariate normal mixtures. In these comparisons, the new hybrid accelerator was fastest on most of the examples and often dramatically so. In some cases it accelerated the EM algorithm by factors of over 100. The new pure accelerator is very simple to implement and competed well with the other accelerators. It accelerated the EM algorithm in some cases by factors of over 50. To obtain standard errors, we propose to approximate the inverse of the observed information matrix by using auxiliary output from the new hybrid accelerator. A numerical evaluation of these approximations indicates that they may be useful at least for exploratory purposes.
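The "pure" quasi-Newton idea can be sketched as follows: treat the EM increment g(θ) = M(θ) − θ as a fixed-point residual to be driven to zero and apply Broyden's update to it. This is a simplified stand-in for the paper's accelerators, shown on a two-component Poisson mixture (one of the paper's example areas); the fallback safeguards are choices of this sketch, not part of the original method.

```python
import numpy as np

def em_step(theta, x):
    """One EM update for a two-component Poisson mixture; theta = (pi, lam1, lam2)."""
    pi, l1, l2 = theta
    # log odds of component 2 vs component 1 for each observation (factorials cancel)
    t = np.log((1 - pi) / pi) + x * np.log(l2 / l1) + (l1 - l2)
    r = 1.0 / (1.0 + np.exp(np.clip(t, -700, 700)))  # responsibilities for component 1
    return np.array([r.mean(), (r @ x) / r.sum(), ((1 - r) @ x) / (1 - r).sum()])

def broyden_em(x, theta0, tol=1e-10, max_iter=500):
    """Quasi-Newton ('pure') acceleration: zero the residual g(theta) =
    M(theta) - theta with Broyden updates, falling back to a plain EM
    step whenever a quasi-Newton step misbehaves."""
    theta = np.asarray(theta0, float)
    A = -np.eye(3)                    # initial guess; makes the first step a plain EM step
    g = em_step(theta, x) - theta
    for k in range(max_iter):
        try:
            step = -np.linalg.solve(A, g)
        except np.linalg.LinAlgError:
            step = g
        cand = theta + step
        if (not (0 < cand[0] < 1 and cand[1] > 0 and cand[2] > 0)
                or np.linalg.norm(step) > 10 * np.linalg.norm(g)):
            step, cand = g, theta + g  # out of bounds or overly long: use the EM step
        g_new = em_step(cand, x) - cand
        if np.linalg.norm(g_new) > np.linalg.norm(g):
            cand = theta + g           # reject a non-contracting step, restart the Jacobian
            g_new = em_step(cand, x) - cand
            A = -np.eye(3)
        else:
            A += np.outer((g_new - g) - A @ step, step) / (step @ step)  # Broyden update
        theta, g = cand, g_new
        if np.max(np.abs(g)) < tol:
            break
    return theta, k + 1
```

In the worst case every quasi-Newton step is rejected and the iteration reduces to plain EM; near the fixed point the Broyden steps behave like Newton steps and cut the iteration count sharply.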
Journal of the American Statistical Association | 1993
Mortaza Jamshidian; Robert I. Jennrich
The EM algorithm is a very popular and widely applicable algorithm for the computation of maximum likelihood estimates. Although its implementation is generally simple, the EM algorithm often exhibits slow convergence and is costly in some areas of application. Past attempts to accelerate the EM algorithm have most commonly been based on some form of Aitken acceleration. Here we propose an alternative method based on conjugate gradients. The key, as we show, is that the EM step can be viewed (approximately at least) as a generalized gradient, making it natural to apply generalized conjugate gradient methods in an attempt to accelerate the EM algorithm. The proposed method is relatively simple to implement and can handle problems with a large number of parameters, an important feature of most EM algorithms. To demonstrate the effectiveness of the proposed acceleration method, we consider its application to several problems in each of the following areas: estimation of a covariance matrix from incomplete mu...
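The conjugate-gradient idea can be sketched by treating the EM increment as a generalized gradient and recombining it with the previous search direction. The crude multi-step-length search below stands in for a proper line search, and the two-component normal mixture with unit variances is an assumption of this sketch, not the paper's setting.

```python
import numpy as np

def em_map(theta, x):
    """One EM update for a two-component normal mixture with unit variances."""
    pi, m1, m2 = theta
    f1 = pi * np.exp(-0.5 * (x - m1) ** 2)
    f2 = (1 - pi) * np.exp(-0.5 * (x - m2) ** 2)
    r = f1 / (f1 + f2)
    return np.array([r.mean(), (r @ x) / r.sum(), ((1 - r) @ x) / (1 - r).sum()])

def loglik(theta, x):
    """Observed-data log-likelihood (additive constants dropped)."""
    pi, m1, m2 = theta
    return np.sum(np.log(pi * np.exp(-0.5 * (x - m1) ** 2)
                         + (1 - pi) * np.exp(-0.5 * (x - m2) ** 2)))

def cg_em(x, theta, max_iter=300, tol=1e-9):
    """Conjugate-gradient acceleration: combine the EM increment (a
    generalized gradient) with the previous direction, and take the best
    of a few step lengths; the plain EM step is always a candidate."""
    g = em_map(theta, x) - theta
    d = g.copy()
    for _ in range(max_iter):
        cands = [theta + a * d for a in (1.0, 2.0, 4.0, 8.0)]
        cands = [c for c in cands if 0 < c[0] < 1] + [theta + g]
        theta = max(cands, key=lambda c: loglik(c, x))
        g_new = em_map(theta, x) - theta
        if np.max(np.abs(g_new)) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere weight
        d = g_new + beta * d
        g = g_new
    return theta
```

Because the plain EM step is always among the candidates, each iteration improves the log-likelihood at least as much as one EM step, so the sketch inherits EM's convergence while often taking far fewer iterations.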
Journal of the Royal Statistical Society, Series B (Statistical Methodology) | 2000
Mortaza Jamshidian; Robert I. Jennrich
log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and algorithms that differentiate the EM operator are identified.
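The idea of differentiating the EM operator can be shown on the classic one-parameter multinomial linkage example, where everything is available in closed form: at the MLE, the observed information equals the conditional expected complete-data information times (1 − dM), with dM the derivative of the EM map obtained here by a central difference. A minimal sketch, not the paper's algorithms:

```python
y1, y2, y3, y4 = 125.0, 18.0, 20.0, 34.0   # Rao's linkage counts, a classic EM test problem

def em_map(t):
    """EM update for cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4)."""
    x12 = y1 * t / (2 + t)                 # E-step: expected count in the t/4 part of cell 1
    return (x12 + y4) / (x12 + y2 + y3 + y4)   # M-step

t = 0.5
for _ in range(200):                       # iterate EM to the MLE
    t = em_map(t)

# conditional expected complete-data information at the MLE
x12 = y1 * t / (2 + t)
I_com = x12 / t ** 2 + (y2 + y3) / (1 - t) ** 2 + y4 / t ** 2

# numerically differentiate the EM operator
h = 1e-6
dM = (em_map(t + h) - em_map(t - h)) / (2 * h)

I_obs = I_com * (1 - dM)                   # identity I_obs = I_com (1 - dM) at the MLE
se = (1 / I_obs) ** 0.5                    # standard error of the MLE
```

The result can be checked against the second derivative of the observed-data log-likelihood computed directly, which is what makes this example a convenient test case.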
Journal of the American Statistical Association | 2004
Wei Liu; Mortaza Jamshidian; Ying Zhang
Research on multiple comparison during the past 50 years or so has focused mainly on the comparison of several population means. Several years ago, Spurrier considered the multiple comparison of several simple linear regression lines. He constructed simultaneous confidence bands for all of the contrasts of the simple linear regression lines over the entire range (-∞, ∞) when the models have the same design matrices. This article extends Spurrier's work in several directions. First, multiple linear regression models are considered and the design matrices are allowed to be different. Second, the predictor variables are either unconstrained or constrained to finite intervals. Third, the types of comparison allowed can be very flexible, including pairwise, many–one, and successive. Two simulation methods are proposed for the calculation of critical constants. The methodologies are illustrated with examples.
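The simulation idea for critical constants can be sketched for a single simple linear model over a finite interval: draw the pivotal quantities, take the supremum of the standardized deviation over a grid, and use its (1 − α) quantile. The grid approximation and the intercept-plus-one-predictor design are assumptions of this sketch, not the paper's general setting.

```python
import numpy as np

def band_critical_constant(X, xgrid, alpha=0.05, nsim=50000, seed=0):
    """Simulate the critical constant c for the simultaneous band
    x~'beta_hat +/- c * sigma_hat * (x~'(X'X)^{-1} x~)^{1/2} over xgrid,
    for a simple linear model (intercept and one predictor)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    L = np.linalg.cholesky(XtX_inv)
    G = np.column_stack([np.ones_like(xgrid), xgrid])       # rows x~ over the grid
    sd = np.sqrt(np.einsum('ij,jk,ik->i', G, XtX_inv, G))
    # beta_hat - beta ~ N(0, sigma^2 (X'X)^{-1}); take sigma = 1 w.l.o.g.
    Z = rng.standard_normal((nsim, p)) @ L.T
    U = np.sqrt(rng.chisquare(n - p, size=nsim) / (n - p))  # sigma_hat / sigma
    sup = np.max(np.abs(G @ Z.T) / sd[:, None], axis=0) / U
    return np.quantile(sup, 1 - alpha)
```

Shrinking the grid to a single point recovers the pointwise t critical value, which gives a quick sanity check on the simulation.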
Computational Statistics & Data Analysis | 2004
Mortaza Jamshidian
This work proposes a globally convergent algorithm, based on gradient projections, for maximum likelihood estimation under linear equality and inequality restrictions (constraints) on parameters. The proposed algorithm has wide applicability, and as an important special case its application to restricted expectation-maximization (EM) problems is described. Often, a class of algorithms that we call expectation-restricted-maximization (ERM) is used to deal with constraints in the EM setting. We describe two such ERM algorithms that handle linear equality constraints, and discuss their convergence. As we explain, the assumptions for global convergence of one of the algorithms may be practically too restrictive, and as such we suggest a modification. We provide an example where the second algorithm fails. In general we argue that the gradient projection (GP) algorithm is superior to ERM algorithms in terms of simplicity of implementation and time to converge. We give an example of application of GP to parameter estimation of mixtures of normal densities where linear inequality constraints are imposed, and compare CPU times required for the algorithms discussed.
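A minimal sketch of the gradient-projection idea for equality constraints only: project the gradient onto the null space of the constraint matrix and take ascent steps with step halving. The full algorithm's active-set handling of inequality constraints is omitted here (positivity is kept only by halving), and the function names are choices of this sketch. It is illustrated on multinomial MLE, whose solution is known in closed form.

```python
import numpy as np

def gp_mle(grad, loglik, A, theta0, max_iter=5000, tol=1e-8):
    """Gradient projection for maximum likelihood under linear equality
    constraints A @ theta = b; theta0 must satisfy them, and the projected
    steps then preserve them. Positivity is enforced only by step halving,
    a simplification of the full treatment of inequality constraints."""
    P = np.eye(len(theta0)) - A.T @ np.linalg.solve(A @ A.T, A)  # projector onto null(A)
    theta = np.asarray(theta0, float)
    for _ in range(max_iter):
        d = P @ grad(theta)            # projected gradient: a feasible ascent direction
        if np.max(np.abs(d)) < tol:
            break
        a, ll = 1.0, loglik(theta)
        while a > 1e-14:
            cand = theta + a * d
            if np.all(cand > 0) and loglik(cand) >= ll:
                break
            a /= 2
        else:
            break
        theta = cand
    return theta

# MLE of multinomial cell probabilities: maximize sum n_i log p_i  s.t.  sum p_i = 1
counts = np.array([30.0, 50.0, 20.0])
A = np.ones((1, 3))
p_hat = gp_mle(lambda p: counts / p, lambda p: counts @ np.log(p), A, np.full(3, 1 / 3))
```

The known answer is p_i = n_i / n, so the example doubles as a correctness check for the projection.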
Journal of Computational and Graphical Statistics | 2005
Wei Liu; Mortaza Jamshidian; Ying Zhang; J. Donnelly
This article presents a method for the construction of a simultaneous confidence band for the normal-error multiple linear regression model. The confidence bands considered have their width proportional to the standard error of the estimated regression function, and the predictor variables are allowed to be constrained in intervals. Past articles in this area gave exact bands only for the simple regression model. When there is more than one predictor variable, only conservative bands are proposed in the statistics literature. This article advances this methodology by providing simulation-based confidence bands for regression models with any number of predictor variables. Additionally, a criterion is proposed to assess the sensitivity of a simultaneous confidence band. This criterion is defined to be the probability that a false linear regression model is excluded from the band at at least one point, and hence is correctly declared a false model by the band. Finally, the article considers and compares several computational algorithms for obtaining the confidence band.
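The sensitivity criterion lends itself to direct Monte Carlo evaluation: simulate data from the true model, build the band, and record how often the specified false line escapes it at some grid point. The simple-linear design, the fixed critical constant, and the grid below are assumptions of this sketch, not the article's procedure.

```python
import numpy as np

def band_sensitivity(beta_true, beta_false, X, xgrid, c, sigma=1.0, nsim=2000, seed=0):
    """Monte Carlo estimate of the sensitivity criterion: the probability
    that the false line beta_false falls outside the simultaneous band
    x~'beta_hat +/- c * sigma_hat * (x~'(X'X)^{-1} x~)^{1/2} somewhere on xgrid."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    H = XtX_inv @ X.T
    G = np.column_stack([np.ones_like(xgrid), xgrid])
    sd = np.sqrt(np.einsum('ij,jk,ik->i', G, XtX_inv, G))
    excluded = 0
    for _ in range(nsim):
        y = X @ beta_true + sigma * rng.standard_normal(n)   # data from the true model
        bhat = H @ y
        s = np.sqrt(np.sum((y - X @ bhat) ** 2) / (n - p))   # sigma_hat
        if np.any(np.abs(G @ (beta_false - bhat)) > c * s * sd):
            excluded += 1
    return excluded / nsim
```

Feeding the true model in as the "false" one estimates the band's non-coverage instead, which provides a useful calibration check on the chosen critical constant.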
Journal of Computational and Graphical Statistics | 2004
Ying Zhang; Mortaza Jamshidian
In this article, we study algorithms for computing the nonparametric maximum likelihood estimator (NPMLE) of the failure function with two types of censored data: doubly censored data and (type 2) interval-censored data. We consider two projection methods, namely the iterative convex minorant algorithm (ICM) and a generalization of the Rosen algorithm (GR), and compare these methods to the well-known EM algorithm. The comparison, conducted via simulation studies, shows that hybrid algorithms that alternate EM with GR for doubly censored data, or EM with ICM for (type 2) interval-censored data, are much more efficient than the EM algorithm alone, especially in large-sample situations.
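The full algorithms are involved, but the inner projection step of the ICM amounts to a weighted isotonic regression, computable by pool-adjacent-violators. A minimal sketch of that core step, assuming the standard PAVA formulation:

```python
def pava(y, w=None):
    """Weighted pool-adjacent-violators: the nondecreasing least-squares
    projection of y (with weights w), the core projection step of the ICM."""
    y = list(map(float, y))
    w = [1.0] * len(y) if w is None else list(map(float, w))
    blocks = []                                   # each block: [weighted mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # pool while the previous block violates monotonicity
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out
```

Each output value is the weighted mean of its pooled block, so the fit is piecewise constant and nondecreasing.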
Journal of Statistical Computation and Simulation | 1998
Mortaza Jamshidian; Peter M. Bentler
In the past, several algorithms have been given to solve the minimum trace factor analysis (MTFA) and the constrained minimum trace factor analysis (CMTFA) problems. Some of these algorithms, depending on the initial value, may converge to points that are not the solution to the above problems, some converge linearly, and some are quadratically convergent but are somewhat difficult to implement. In this paper we propose modified Han–Powell algorithms to solve the MTFA and CMTFA problems. The modifications deal with the problem of multiple eigenvalues. The proposed algorithms are globally convergent and their speed is locally superlinear. We also give a modified Han–Powell algorithm to solve the weighted minimum trace factor analysis (WMTFA) problem. This method is also locally superlinear and is simpler to implement as compared to methods proposed earlier. Four examples are given to show the performance of the proposed algorithms. More generally, our experience with these algorithms shows that, starting at...
Archive | 1997
Mortaza Jamshidian
The EM algorithm is a popular algorithm for obtaining maximum likelihood estimates. Here we propose an EM algorithm for the factor analysis model. This algorithm extends a previously proposed EM algorithm to handle problems with missing data. It is simple to implement and is the most storage-efficient among its competitors. We apply our algorithm to three examples and discuss the results. For problems with a reasonable amount of missing data, it converges in reasonable time. For problems with a large amount of missing data, the EM algorithm is usually slow; for such cases we successfully apply two EM acceleration methods to our examples. Finally, we discuss different methods of obtaining standard errors, and in particular we recommend a method based on a central difference approximation to the derivative.
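The central-difference idea for standard errors can be sketched generically: approximate the Hessian of the log-likelihood by central differences of a central-difference gradient, then invert the negative Hessian at the MLE. The normal-model example below is an assumption of this sketch (chosen because its observed information is known in closed form), not the factor analysis setting of the paper.

```python
import numpy as np

def cd_grad(f, theta, h=1e-4):
    """Gradient of f at theta by central differences."""
    g = np.zeros(len(theta))
    for j in range(len(theta)):
        e = np.zeros(len(theta)); e[j] = h
        g[j] = (f(theta + e) - f(theta - e)) / (2 * h)
    return g

def cd_hessian(f, theta, h=1e-4):
    """Hessian of f by central differences of the central-difference gradient."""
    p = len(theta)
    H = np.zeros((p, p))
    for j in range(p):
        e = np.zeros(p); e[j] = h
        H[:, j] = (cd_grad(f, theta + e, h) - cd_grad(f, theta - e, h)) / (2 * h)
    return (H + H.T) / 2               # symmetrize away numerical asymmetry

# standard errors for the N(mu, s^2) MLE (additive constants dropped)
def ll(theta, x):
    mu, s = theta
    return -len(x) * np.log(s) - 0.5 * np.sum((x - mu) ** 2) / s ** 2

rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, size=200)
mle = np.array([x.mean(), x.std()])    # closed-form MLE of (mu, s)
H = cd_hessian(lambda th: ll(th, x), mle)
se = np.sqrt(np.diag(np.linalg.inv(-H)))
```

For this model the observed information at the MLE is diagonal with entries n/s² and 2n/s², so the numerical standard errors can be checked exactly.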