
Publication


Featured research published by Narayan C. Giri.


Communications in Statistics - Simulation and Computation | 1995

On approximations involving the beta distribution

Benedikt Jóhannesson; Narayan C. Giri

In part 1 we give two approximations in which a linear combination of central beta variables is approximated by a single beta variable. Extensive numerical computations are done to compare these approximations for the case of two central beta variables with parameters (b1,a1), (b2,a2), with special attention to the case b1 √ b2. They show that these approximations are sufficiently good for most cases. The second part of the paper compares four similar approximations for noncentral beta variables and concludes that three of them are sufficiently good for most practical purposes.
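The moment-matching idea behind approximations of this kind can be sketched in a few lines; the weights and beta parameters below are illustrative, and this generic construction is not necessarily one of the paper's two approximations:

```python
def beta_moments(a, b):
    """Mean and variance of a Beta(a, b) variable."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

def fit_beta(mean, var):
    """Solve for (a, b) of a single beta with the given mean and variance."""
    t = mean * (1 - mean) / var - 1
    return mean * t, (1 - mean) * t

# Z = c1*X1 + c2*X2 with independent X1 ~ Beta(2, 3), X2 ~ Beta(4, 5)
# and c1 + c2 = 1, so Z still lives on [0, 1].
c1, c2 = 0.4, 0.6
m1, v1 = beta_moments(2, 3)
m2, v2 = beta_moments(4, 5)
m = c1 * m1 + c2 * m2          # mean of the combination
v = c1**2 * v1 + c2**2 * v2    # variance, using independence
a, b = fit_beta(m, v)
print(f"approximating by Beta({a:.3f}, {b:.3f})")
```

By construction the fitted beta reproduces the first two moments of the combination exactly; the quality of the approximation then hinges on the higher moments, which is what the paper's numerical comparisons assess.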


Communications in Statistics - Theory and Methods | 1993

James-Stein estimation with constraints on the norm

E. Marchand; Narayan C. Giri

Consider the problem of estimating a multivariate mean θ (p×1), p ≥ 3, based on a sample x1, …, xn with quadratic loss function. We find an optimal decision rule within the class of James-Stein type decision rules when the underlying distribution is a variance mixture of normals and when the norm ||θ|| is known. When the norm is restricted to a known interval, typically no optimal James-Stein type rule exists, but we characterize a minimal complete class within the class of James-Stein type decision rules. We also characterize the subclass of James-Stein type decision rules that dominate the sample mean.
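For context, the classical positive-part James-Stein rule (which shrinks toward the origin and uses none of the norm information exploited in the paper) is easy to simulate; the dimension, true mean, and replication count below are illustrative:

```python
import random

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking x toward the origin."""
    p = len(x)
    norm2 = sum(xi * xi for xi in x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [shrink * xi for xi in x]

random.seed(1)
p, reps = 10, 2000
theta = [0.5] * p                       # true mean, illustrative
sse_mle = sse_js = 0.0
for _ in range(reps):
    x = [t + random.gauss(0.0, 1.0) for t in theta]  # one observation of the mean
    js = james_stein(x)
    sse_mle += sum((e - t) ** 2 for e, t in zip(x, theta))
    sse_js += sum((e - t) ** 2 for e, t in zip(js, theta))
print(f"risk(MLE) ~= {sse_mle / reps:.2f}, risk(JS) ~= {sse_js / reps:.2f}")
```

The simulated risk of the James-Stein rule falls well below that of the sample mean when ||θ|| is small relative to p, which is the regime in which knowledge of the norm is most valuable.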


Journal of Statistical Computation and Simulation | 1981

Tests for the mean vector under intraclass covariance structure

Bernard Clément; Sukharanyan Chakraborty; Bimal Kumar Sinha; Narayan C. Giri

Suppose X1, …, XN are i.i.d. p-variate normal vectors with mean μ and intraclass covariance structure. Geisser [JASA, 1963] derived the likelihood ratio tests for the testing problems: (1) H0: μ specified vs H1: μ arbitrary, and (2) H0: the components of μ are all equal vs H1: μ arbitrary. Here we discuss these two problems from the invariance point of view. Several simple and easily workable tests are proposed. Their local powers in small samples are evaluated by simulation and compared. Some general conclusions are drawn.
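The intraclass covariance structure referred to here is Σ = σ²[(1 − ρ)I + ρJ], with J the all-ones matrix. A quick sketch, with illustrative values of p, σ², and ρ, builds the matrix and checks its two distinct eigenvalues:

```python
def intraclass_cov(p, sigma2, rho):
    """Sigma = sigma^2 * ((1 - rho) * I + rho * J), J the all-ones matrix."""
    return [[sigma2 * (1.0 if i == j else rho) for j in range(p)]
            for i in range(p)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

p, sigma2, rho = 4, 2.0, 0.3   # illustrative values
S = intraclass_cov(p, sigma2, rho)

# The all-ones vector is an eigenvector with eigenvalue sigma2*(1+(p-1)*rho);
# any contrast vector (here e1 - e2) has eigenvalue sigma2*(1 - rho).
ones = [1.0] * p
contrast = [1.0, -1.0] + [0.0] * (p - 2)
ev1 = matvec(S, ones)[0]
ev2 = matvec(S, contrast)[0]
```

Positive definiteness thus requires −1/(p − 1) < ρ < 1, the usual constraint on the intraclass correlation.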


Annals of the Institute of Statistical Mathematics | 1988

Locally minimax tests in symmetrical distributions

Narayan C. Giri

In this paper we give an extension of the theory of local minimax property of Giri and Kiefer (1964, Ann. Math. Statist., 35, 21–35) to the family of elliptically symmetric distributions which contains the multivariate normal distribution as a member.


Annals of the Institute of Statistical Mathematics | 1992

On an optimum test of the equality of two covariance matrices

Narayan C. Giri

Let X: p × 1, Y: p × 1 be independently and normally distributed p-vectors with unknown means ξ1, ξ2 and unknown covariance matrices Σ1, Σ2 (> 0) respectively. We shall show that Pillai's test, which is locally best invariant, is locally minimax for testing H0: Σ1 = Σ2 against the alternative H1: tr(Σ2⁻¹Σ1 − I) = σ > 0, as σ → 0. However, this test is not of type D among G-invariant tests.
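The local alternative is indexed by σ = tr(Σ2⁻¹Σ1 − I), which is straightforward to compute for small matrices; the 2×2 covariance matrices below are illustrative:

```python
def inv2x2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Sigma1 = [[1.2, 0.1], [0.1, 1.1]]
Sigma2 = [[1.0, 0.0], [0.0, 1.0]]

P = matmul2(inv2x2(Sigma2), Sigma1)
sigma = (P[0][0] - 1.0) + (P[1][1] - 1.0)   # tr(Sigma2^{-1} Sigma1 - I)
print(f"sigma = {sigma:.3f}")
```

Under H0 the two matrices coincide and σ = 0; the local minimax property concerns alternatives where σ shrinks to zero.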


Communications in Statistics - Theory and Methods | 1991

Improved estimation of variance components in balanced hierarchical mixed models

Kalyan Das; Q. Meneghini; Narayan C. Giri

The problem of simultaneous estimation of variance components is considered for a balanced hierarchical mixed model under a sum of squared error loss. A new class of estimators is suggested which dominates the usual sensible estimators. These estimators shrink towards the geometric mean of the component mean squares that appear in the ANOVA table. Numerical results are tabulated to exhibit the improvement in risk under a simple model.
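The shrinkage target can be illustrated with a simple convex-combination sketch; the weight and mean squares below are illustrative, and the paper's dominating estimators have a specific data-dependent form rather than a fixed weight:

```python
import math

def shrink_to_geometric_mean(mean_squares, weight):
    """Shrink each mean square toward the geometric mean of all of them.

    weight = 0 returns the usual estimators unchanged; weight = 1 replaces
    every component by the common geometric mean.
    """
    gm = math.exp(sum(math.log(ms) for ms in mean_squares) / len(mean_squares))
    return [(1.0 - weight) * ms + weight * gm for ms in mean_squares]

ms = [4.0, 2.0, 0.5]            # component mean squares from an ANOVA table
shrunk = shrink_to_geometric_mean(ms, weight=0.2)
print(shrunk)
```

Pulling the component estimates toward a common center trades a little bias for a reduction in overall squared-error risk, the same mechanism that drives Stein-type shrinkage.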


Statistische Hefte | 1983

A test of bivariate independence with additional data

M. Ahmad; Narayan C. Giri

We study in this paper the problem of testing independence of two normally distributed random variables (X, Y) on the basis of a random paired sample of size N > 2 on both X and Y and independent samples of sizes M > 1, L > 1 on X and Y respectively; we call the latter set of data the additional data. It may be noted that this type of data sometimes arises in biometric and econometric studies. The usual statistical analysis is based only on the paired sample, but one may hope to get better results using the additional data, although it might seem at first sight that the additional data is irrelevant for testing independence. In the context of our problem of testing bivariate independence, reference may be made to the work of Eaton and Kariya [1974], who had additional data on only one of the variables and proved that the L.R.T., which does not depend on the additional data, is a conditionally uniformly most powerful invariant (under a suitable group of transformations) test. They also obtained a locally most powerful invariant test. Given the additional data on both X and Y, we derive below a locally most powerful invariant test of H0: ρ = 0 vs. H1: ρ > 0, where ρ is the correlation coefficient between X and Y, and compare its performance with that of the usual r-test. The conclusion is that the use of the additional information is most desirable in this problem.
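The usual r-test referred to above is computed from the paired sample alone, rejecting for large t = r√(N − 2)/√(1 − r²); a minimal sketch with illustrative data:

```python
import math

def r_test_statistic(xs, ys):
    """Sample correlation r and the t statistic r*sqrt(N-2)/sqrt(1-r^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)
    return r, t

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative paired sample
ys = [1.2, 1.9, 3.4, 3.9, 5.1]
r, t = r_test_statistic(xs, ys)
print(f"r = {r:.3f}, t = {t:.2f}")
```

Under H0: ρ = 0 with bivariate normal data, t follows a Student distribution with N − 2 degrees of freedom; the paper's invariant test additionally exploits the unpaired samples on X and Y.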


Multivariate Statistical Inference | 1977

Estimators of Parameters and Their Functions in a Multivariate Normal Distribution

Narayan C. Giri

This chapter discusses methods to estimate the parameters of a probability density function and some of their functions, namely the multiple correlation coefficient, partial correlation coefficients of different orders, and regression coefficients, on the basis of the information contained in a random sample of size N. The method of maximum likelihood is successful in finding suitable estimators of parameters in many problems. Under certain regularity conditions on the probability density function, the maximum likelihood estimator is strongly consistent in large samples. The invariance property of maximum likelihood estimation states that if θ̂ is a maximum likelihood estimator of θ ∈ Ω, then f(θ̂) is a maximum likelihood estimator of f(θ), where f is some function of θ. A Bayes estimator of θ with respect to the prior density h(θ) is the estimator d0 ∈ D which takes the value d0(x) for X = x and minimizes the posterior risk given X = x. The Bayes estimator d0 also minimizes the prior risk.
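The invariance property can be seen concretely in the normal case: the maximum likelihood estimator of σ² is the uncorrected sample variance, so the maximum likelihood estimator of σ is its square root. A minimal sketch with illustrative data:

```python
import math

def mle_variance(xs):
    """MLE of sigma^2 for a normal sample: the uncorrected sample variance."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

xs = [2.1, 1.9, 2.4, 2.0, 2.6]       # illustrative sample
sigma2_hat = mle_variance(xs)
sigma_hat = math.sqrt(sigma2_hat)    # invariance: MLE of f(theta) is f(MLE of theta)
print(f"MLE of sigma^2 = {sigma2_hat:.4f}, MLE of sigma = {sigma_hat:.4f}")
```

No new maximization is needed for σ: the likelihood in the reparametrized problem attains its maximum at exactly the transformed value.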


Multivariate Statistical Inference | 1977

CHAPTER IX – Discriminant Analysis

Narayan C. Giri

This chapter provides an overview of discriminant analysis. Discriminant analysis consists of assigning an individual or a group of individuals to one of several known or unknown distinct populations, on the basis of observations on several characters of the individual or the group, and of a sample of observations on these characters from the populations if these are unknown. The formulation of the problem assumes that the functional form of fi is known for each i and that the fi are different for different i. However, the parameters involved in fi may be known or unknown. If they are unknown, supplementary information about these parameters is obtained through additional samples from these populations; these additional samples are generally called training samples. A classification rule R divides the sample space E into disjoint and exhaustive regions R1, …, Rk. A classification rule R is said to be admissible if there does not exist a classification rule R* which is better than R.
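A minimal instance of a classification rule is the nearest-mean rule, which coincides with the linear discriminant rule for normal populations sharing a common spherical covariance; the training samples below are illustrative:

```python
def mean_vector(sample):
    """Componentwise mean of a list of observation vectors."""
    n, p = len(sample), len(sample[0])
    return [sum(x[j] for x in sample) / n for j in range(p)]

def classify(x, means):
    """Assign x to the population whose estimated mean is closest."""
    dists = [sum((xi - mi) ** 2 for xi, mi in zip(x, m)) for m in means]
    return dists.index(min(dists))

# Training samples from two hypothetical populations.
train1 = [[0.1, 0.2], [-0.2, 0.0], [0.0, -0.1]]
train2 = [[2.0, 2.1], [1.9, 2.2], [2.2, 1.8]]
means = [mean_vector(train1), mean_vector(train2)]
label = classify([1.8, 2.0], means)   # lands in population 2 (index 1)
```

The regions R1, …, Rk induced by this rule are the half-spaces (more generally, Voronoi cells) closest to each population mean, so they are disjoint and exhaustive as the chapter requires.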


Journal of Multivariate Analysis | 1992

Best equivariant estimation in curved covariance models

François Perron; Narayan C. Giri

Let X1, ..., Xn (n > p > 2) be independently and identically distributed p-dimensional normal random vectors with mean vector μ and positive definite covariance matrix Σ, and let Σ and the sample cross-product matrix S be partitioned conformably into 1 and p − 1 rows and columns. We derive here the best equivariant estimators of the regression coefficient vector β = Σ22⁻¹Σ21 and the covariance matrix Σ22 of the covariates, given the value of the multiple correlation coefficient ρ² = Σ11⁻¹Σ12Σ22⁻¹Σ21. Such problems arise in practice when it is known that ρ² is significant. Let R² = S11⁻¹S12S22⁻¹S21. If the value of ρ² is such that terms of order (Rρ)² and higher can be neglected, the best equivariant estimator of β is approximately equal to (n − 1)(p − 1)⁻¹ρ²S22⁻¹S21, where S22⁻¹S21 is the maximum likelihood estimator of β. When ρ² = 0, the best equivariant estimator of Σ22 is (n − p + 1)⁻¹S22.

Collaboration


Dive into Narayan C. Giri's collaboration.

Top Co-Authors

Bernard Clément
École Polytechnique de Montréal

E. Marchand
University of New Brunswick

M. Ahmad
Université du Québec

Q. Meneghini
Université de Montréal

Sujit K. Basu
Université de Montréal