
Publication


Featured research published by C. Radhakrishna Rao.


Journal of the American Statistical Association | 1972

Estimation of Variance and Covariance Components in Linear Models

C. Radhakrishna Rao

Abstract We write a linear model in the form Y = Xβ + ξ, where β is an unknown parameter and ξ is a hypothetical random variable with a given dispersion structure but containing unknown parameters called variance and covariance components. A new method of estimation called MINQUE (Minimum Norm Quadratic Unbiased Estimation), developed in a previous article [5], is extended to the estimation of variance and covariance components.


Journal of the American Statistical Association | 1970

Estimation of Heteroscedastic Variances in Linear Models

C. Radhakrishna Rao

Abstract Let Y = Xβ + e be a Gauss-Markoff linear model such that E(e) = 0 and D(e), the dispersion matrix of the error vector, is a diagonal matrix Δ whose ith diagonal element is σi², the variance of the ith observation yi. Some of the σi² may be equal. The problem is to estimate all the different variances. In this article, a new method known as MINQUE (Minimum Norm Quadratic Unbiased Estimation) is introduced for the estimation of the heteroscedastic variances. This method satisfies some intuitive properties: (i) if S1 is the MINQUE of Σpiσi² and S2 that of Σqiσi², then S1 + S2 is the MINQUE of Σ(pi + qi)σi²; (ii) it is invariant under orthogonal transformation; etc. Some sufficient conditions for the estimation of all linear functions of the σi² are given. The use of the estimated variances in problems of inference on the β parameters is briefly indicated.
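One common way to write the MINQUE equations for this heteroscedastic model is as a linear system: with M = I − X(X'X)⁻X' the residual-forming projector and ê = My the residuals, the variance estimates σ̂ solve (M∘M)σ̂ = ê∘ê, where ∘ is the elementwise product. A minimal NumPy sketch under that reading (the function name is illustrative, and invertibility of M∘M is assumed; MINQUE can return negative estimates):

```python
import numpy as np

def minque_hetero(y, X):
    """MINQUE of the heteroscedastic variances in y = X beta + e
    with Cov(e) = diag(sigma_1^2, ..., sigma_n^2): solve the
    linear system (M*M) sigma = ehat**2 elementwise, where
    M = I - X (X'X)^- X' and ehat = M y are the residuals."""
    n = len(y)
    # Residual-forming projector; pinv handles rank-deficient X
    M = np.eye(n) - X @ np.linalg.pinv(X.T @ X) @ X.T
    ehat = M @ y
    # Hadamard (elementwise) square of M; assumed invertible here
    S = M * M
    # Estimates may be negative -- MINQUE is unbiased, not nonnegative
    return np.linalg.solve(S, ehat ** 2)
```

For the intercept-only design X = 1, M is the centering matrix, and the system can be solved by hand to check the routine.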


Journal of Multivariate Analysis | 1971

Minimum variance quadratic unbiased estimation of variance components

C. Radhakrishna Rao

The variance of a quadratic function of the random variables in a linear model is minimized to obtain locally best unbiased estimators (MIVQUE) of variance components. Condition for such estimators to be independent of the kurtosis of the variables is given. When the variables are normally distributed, MIVQUE coincides with MINQUE under the Euclidean norm of a matrix. Conditions under which MIVQUE has uniformly minimum variance property are obtained. Expressions are also given for MIMSQE (minimum mean square quadratic estimators).


Handbook of Statistics | 1980

Estimation of variance components

C. Radhakrishna Rao; Jürgen Kleffe

Publisher Summary This chapter discusses the usual mixed linear model on variance components. The unknown parameters of this model are called "variance components." The ANOVA technique provides good estimators in balanced designs, but such estimators may be inefficient in more general linear models. A completely different approach is the ML (maximum likelihood) method. The likelihood of the unknown parameters is based on the observed Y, and the likelihood equations are obtained by computing the derivatives of the likelihood with respect to the parameters. The equations obtained from the marginal likelihood, based on the maximal invariant of Y, are called "marginal maximum likelihood (MML) equations." The general large-sample properties associated with ML estimators are misleading in the absence of studies on the orders of sample sizes for which these properties hold in particular cases. The bias in MML estimators may not be large even in small samples. The chapter also discusses a general method called "minimum norm quadratic estimation" (MINQE). The method is applicable in situations where ML and MML fail. It offers wide scope in the choice of the norm, depending on the nature of the model and the prior information available, and there is an automatic provision for incorporating available prior information on the unknown parameters. The MINQE equation provides a natural numerical algorithm for computing the ML or MML estimator. For a suitable choice of the norm, the MINQE estimators provide minimum variance estimators of θ when Y is normally distributed.
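The summary notes that the MINQE equation yields a natural numerical algorithm for the ML/MML estimator. The standard form of that idea is iterated MINQUE for a covariance structure Cov(y) = Σi θi Vi: each pass solves the MINQUE equations at the current values of θ, and a fixed point satisfies the MML (REML-type) equations. A minimal sketch (function name, fixed iteration count, and matrix inversions are illustrative choices, not from the chapter):

```python
import numpy as np

def iterated_minque(y, X, Vs, theta0, iters=20):
    """Iterated MINQUE for variance components theta in the mixed
    model y = X beta + e with Cov(y) = sum_i theta_i * Vs[i].
    Each pass solves S theta = q with S_ij = tr(P Vi P Vj) and
    q_i = y' P Vi P y, where P projects out the fixed effects."""
    theta = np.asarray(theta0, dtype=float)
    k = len(Vs)
    for _ in range(iters):
        V = sum(t * Vi for t, Vi in zip(theta, Vs))
        Vinv = np.linalg.inv(V)
        XtVinv = X.T @ Vinv
        # P = V^-1 - V^-1 X (X' V^-1 X)^-1 X' V^-1
        P = Vinv - Vinv @ X @ np.linalg.solve(XtVinv @ X, XtVinv)
        S = np.array([[np.trace(P @ Vs[i] @ P @ Vs[j]) for j in range(k)]
                      for i in range(k)])
        q = np.array([y @ P @ Vs[i] @ P @ y for i in range(k)])
        theta = np.linalg.solve(S, q)
    return theta
```

With a single component Vs = [I], one pass already gives the residual sum of squares divided by n − rank(X), the usual unbiased variance estimate, which is a convenient sanity check.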


Journal of Multivariate Analysis | 1976

Characterizations of Multivariate Normality. I. Through Independence of some Statistics.

C. G. Khatri; C. Radhakrishna Rao

It is established that a vector variable (X1, ..., Xk) has a multivariate normal distribution if for each Xi the regression on the rest is linear and the conditional distribution about the regression does not depend on the rest of the variables, provided the regression coefficients satisfy some mild conditions. The result is extended to the case where Xi themselves are vector variables.


Journal of Multivariate Analysis | 1971

Estimation of variance and covariance components--MINQUE theory

C. Radhakrishna Rao


Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics | 1967

Least squares theory using an estimated dispersion matrix and its application to measurement of signals

C. Radhakrishna Rao


Archive | 1971

Unified theory of linear estimation

C. Radhakrishna Rao


Journal of Multivariate Analysis | 1973

Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix

C. Radhakrishna Rao


Archive | 1999

Linear Models and Generalizations: Least Squares and Alternatives

C. Radhakrishna Rao; Helge Toutenburg; Shalabh; Christian Heumann

Collaboration


Dive into C. Radhakrishna Rao's collaboration.

Top Co-Authors

C. G. Khatri

Indian Statistical Institute

Shalabh

Indian Institute of Technology Kanpur
