Simo Puntanen
University of Tampere
Publications
Featured research published by Simo Puntanen.
The American Statistician | 1989
Simo Puntanen; George P. H. Styan
Abstract: It is well known that the ordinary least squares estimator of Xβ in the general linear model E y = Xβ, cov y = σ²V, can be the best linear unbiased estimator even if V is not a multiple of the identity matrix. This article presents, in a historical perspective, the development of the several conditions for the ordinary least squares estimator to be best linear unbiased. Various characterizations of these conditions, using generalized inverses and orthogonal projectors, along with several examples, are also given. In addition, a complete set of references is provided.
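One classical characterization surveyed in this literature is that OLS is BLUE exactly when HV = VH, where H is the orthogonal projector onto the column space of X. As an illustrative NumPy sketch (our own, not from the article), we can build a V of the form aH + b(I − H), which commutes with H by construction, and check numerically that the OLS and GLS estimates coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.standard_normal((n, p))
H = X @ np.linalg.solve(X.T @ X, X.T)       # orthogonal projector onto col(X)

# Any V = a*H + b*(I - H) with a, b > 0 commutes with H, so OLS is BLUE.
V = 2.0 * H + 5.0 * (np.eye(n) - H)

y = rng.standard_normal(n)
ols = np.linalg.solve(X.T @ X, X.T @ y)     # ordinary least squares
Vinv_X = np.linalg.solve(V, X)              # V^{-1} X
gls = np.linalg.solve(X.T @ Vinv_X, Vinv_X.T @ y)   # generalized least squares
```

For a V that does not commute with H, the two estimates would generally differ.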
Archive | 2011
Simo Puntanen; George P. H. Styan; Jarkko Isotalo
In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple “tricks” which simplify and clarify the treatment of a problem—both for the student and for the professor. Of course, the concept of a trick is not uniquely defined—by a trick we simply mean here a useful, handy result. In this book we collect together our Top Twenty favourite matrix tricks for linear statistical models.
Linear Algebra and its Applications | 2000
Jürgen Groß; Simo Puntanen
Abstract: In this paper, we consider a general partitioned linear model and a corresponding reduced model. We derive a necessary and sufficient condition for the BLUE of the expectation of the observable random vector under the reduced model to remain the BLUE in the partitioned model. The former is shown to be always an admissible estimator under a mild condition. We also consider alternative linear estimators and their coincidence with the BLUE under the partitioned model.
Communications in Statistics-theory and Methods | 1992
Markku Nurhonen; Simo Puntanen
In this paper we consider the standard partitioned linear regression model where the model matrix is X = (X1 : X2), the corresponding vector of unknown parameters being β = (β′1 : β′2)′. In particular, we are interested in the best linear unbiased estimator (BLUE) of β2. Inspired by the article of Aigner and Balestra (1988), we consider a specific reduced model and show that the BLUE of β2 under the reduced model equals the corresponding BLUE under the original full model.
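In the spherical-errors special case (V = I, where the BLUE is ordinary least squares) this reduced-model result is the familiar Frisch–Waugh–Lovell theorem: regressing the X1-residuals of y on the X1-residuals of X2 recovers the full-model estimate of β2. A minimal NumPy sketch of that special case (our illustration, not the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 8, 2, 2
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
y = rng.standard_normal(n)

# Full-model OLS over X = (X1 : X2)
X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Reduced model: premultiply by M1 = I - P_{X1}, then regress M1*y on M1*X2
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
beta2_reduced = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]
```

The last p2 entries of the full-model estimate agree with the reduced-model estimate of β2.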
Aequationes Mathematicae | 1990
Jerzy K. Baksalary; Simo Puntanen
Summary: A recent note by Marshall and Olkin (1990), in which the Cauchy–Schwarz and Kantorovich inequalities are considered in matrix versions expressed in terms of the Loewner partial ordering, is extended to cover positive semidefinite matrices in addition to positive definite ones.
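In their scalar forms, the two inequalities sandwich the quantity (x′Ax)(x′A⁻¹x) for positive definite A: Cauchy–Schwarz gives the lower bound (x′x)², and Kantorovich gives the upper bound ((m + M)²/(4mM))(x′x)², with m and M the extreme eigenvalues of A. A quick numerical check of this scalar case (our sketch, assuming a randomly generated positive definite A):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # symmetric positive definite
m, M = np.linalg.eigvalsh(A)[[0, -1]]       # smallest / largest eigenvalue

x = rng.standard_normal(n)
cs_lower = (x @ x) ** 2                                     # Cauchy–Schwarz bound
middle = (x @ A @ x) * (x @ np.linalg.solve(A, x))
kant_upper = (m + M) ** 2 / (4 * m * M) * (x @ x) ** 2      # Kantorovich bound
```

The matrix versions of the paper replace the vectors by matrices and the inequalities by the Loewner ordering.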
Communications in Statistics-theory and Methods | 1996
Simo Puntanen
Consider the linear model {y, Xβ, V}, where the model matrix X may not have full column rank and V might be singular. In this paper we introduce a formula for the difference between the BLUEs of Xβ under the full model and the model where one observation has been deleted. We also consider the partitioned linear regression model where the model matrix is (X1 : X2), the corresponding vector of unknown parameters being (β′1 : β′2)′. We show that the BLUE of X1β1 under a specific reduced model equals the corresponding BLUE under the original full model and consider some interesting consequences of this result.
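In the classical full-rank OLS special case, the effect of deleting one observation has a well-known closed form: β̂ − β̂₍ᵢ₎ = (X′X)⁻¹xᵢeᵢ/(1 − hᵢᵢ), with eᵢ the ordinary residual and hᵢᵢ the leverage. A NumPy sketch of that special case (our illustration; the paper treats the more general BLUE setting with possibly singular V):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 8, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T
i = 0
e_i = y[i] - X[i] @ beta                    # ordinary residual of observation i
h_ii = H[i, i]                              # leverage of observation i

# Closed-form update for deleting observation i
beta_drop = beta - XtX_inv @ X[i] * e_i / (1 - h_ii)

# Direct refit without row i, for comparison
beta_refit = np.linalg.lstsq(np.delete(X, i, axis=0), np.delete(y, i), rcond=None)[0]
```

The update and the direct refit agree, so no refitting is needed per deleted observation.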
Journal of Statistical Planning and Inference | 2000
Simo Puntanen; George P. H. Styan; Hans Joachim Werner
Abstract: We offer two matrix-based proofs for the well-known result that the two conditions GX = X and GVQ = 0 are necessary and sufficient for Gy to be the traditional best linear unbiased estimator (BLUE) of Xβ in the Gauss–Markov linear model {y, Xβ, V}, where y is an observable random vector with expectation vector E(y) = Xβ and dispersion matrix D(y) = V; the matrix Q here is an arbitrary but fixed matrix whose range (column space) coincides with the null space of the transpose of X.
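When V is nonsingular and X has full column rank, the BLUE coefficient matrix is G = X(X′V⁻¹X)⁻¹X′V⁻¹, and both conditions can be verified directly. A NumPy sketch of this nonsingular special case (our illustration; Q is taken as an orthonormal basis of the null space of X′ computed via the SVD):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
X = rng.standard_normal((n, p))
B = rng.standard_normal((n, n))
V = B @ B.T + np.eye(n)                     # nonsingular dispersion matrix

Vinv = np.linalg.inv(V)
G = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)   # Gy is then the BLUE of Xβ

# Q: columns spanning the null space of X' (last n - p left singular vectors)
Q = np.linalg.svd(X, full_matrices=True)[0][:, p:]
```

For this G, the identities GX = X and GVQ = 0 hold up to rounding error, matching the characterization in the abstract.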
Statistical Data Analysis and Inference | 1989
Jerzy K. Baksalary; Simo Puntanen
Following a wider interpretation proposed by Rao (1971), weighted-least-squares estimators of Xβ under the general Gauss–Markov model {Y, Xβ, σ²V} are considered in this paper as the family of all statistics of the form Xb_w = X(X′WX)⁻X′WY, where W may be any matrix satisfying the condition k(X′WV) ⊂ k(X′WX) ≠ {0}. Several properties of such estimators are discussed, beginning with their unbiasedness and invariance to the choice of a generalized inverse, and with the requirement that the dispersion matrix of Xb_w is below σ²V with respect to the Löwner ordering. These two auxiliary properties are then applied to derive a number of criteria for Xb_w to coincide with the best linear unbiased estimator of Xβ. Certain related questions, such as the problem of linear sufficiency and the concept of unified-least-squares matrices, are also investigated. Finally, the estimators Xb_w are examined from the point of view of their admissibility, thus providing a preliminary evaluation of their usefulness as biased estimators of Xβ.
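The unbiasedness of Xb_w rests on the map P_w = X(X′WX)⁻X′W reproducing every vector in the column space of X, so that E[Xb_w] = P_w Xβ = Xβ. A minimal NumPy sketch (our own, assuming X of full column rank and a positive definite W, so the ordinary inverse serves as the generalized inverse):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 2
X = rng.standard_normal((n, p))
C = rng.standard_normal((n, n))
W = C @ C.T + np.eye(n)                     # an arbitrary positive definite weight matrix

# Pw maps y to Xb_w = X(X'WX)^- X'W y; with full-column-rank X the
# regular inverse is a valid choice of generalized inverse.
Pw = X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
```

P_w X = X (unbiasedness) and P_w is idempotent, i.e. it is an oblique projector onto col(X).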
Communications in Statistics-theory and Methods | 1997
Simo Puntanen
Consider the linear model {y, Xβ, V}, where the model matrix X may not have full column rank and V might be singular. Let the model matrix be partitioned as X = (X1 : X2), the corresponding vector of unknown parameters being (β′1 : β′2)′. In this paper we generalize some results obtained by Nurhonen and Puntanen (1992) and Puntanen (1996) which are related to the properties of the BLUE of X1β1 under some specific reduced models and under the original full model.
Linear Algebra and its Applications | 1996
Josip Pečarić; Simo Puntanen; George P. H. Styan
Abstract: The well-known Cauchy–Schwarz and Kantorovich inequalities may be expressed in terms of vectors and a positive definite matrix. We consider what happens to these inequalities when the vectors are replaced by matrices, the positive definite matrix is allowed to be positive semidefinite and possibly singular, and the usual inequalities are replaced by Löwner partial orderings. Some examples in the context of linear statistical models are presented.
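A simple instance of the matrix Cauchy–Schwarz inequality (with the positive definite matrix taken as the identity) states that X′Y(Y′Y)⁻¹Y′X precedes X′X in the Löwner ordering, because the difference equals X′(I − P_Y)X with P_Y an orthogonal projector. A NumPy sketch checking this numerically (our illustration, using randomly generated full-rank matrices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 7, 2, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))

PY = Y @ np.linalg.solve(Y.T @ Y, Y.T)      # orthogonal projector onto col(Y)
D = X.T @ X - X.T @ PY @ X                  # X'X - X'Y(Y'Y)^{-1}Y'X

# D = X'(I - PY)X is positive semidefinite, so X'Y(Y'Y)^{-1}Y'X <=_L X'X
eigs = np.linalg.eigvalsh(D)
```

All eigenvalues of D are nonnegative, confirming the Löwner-ordering inequality in this instance.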