Kesar Singh
Rutgers University
Publications
Featured research published by Kesar Singh.
Journal of the American Statistical Association | 1993
Regina Y. Liu; Kesar Singh
Let F and G be the distribution functions of two given populations on Rp, p ≥ 1. We introduce and study a parameter Q = Q(F, G), which measures the overall “outlyingness” of population G relative to population F. The parameter Q can be defined using any concept of data depth. Its value ranges from 0 to 1, and is .5 when F and G are identical. We show that within the class of elliptical distributions, when G departs from F in location or G has a larger spread, or both, the value of Q dwindles down from .5. Hence Q can be used to detect the loss of accuracy or precision of a manufacturing process, and thus it should serve as an important measure in quality assurance. This in fact is the reason why we refer to Q as a quality index in this article. In addition to studying the properties of Q, we provide an exact rank test for testing Q = .5 vs. Q < .5. This can be viewed as a multivariate analog of Wilcoxon's rank sum test. The tests proposed here have power against location change and scale increase simultaneo...
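A minimal sketch of the empirical quality index Q(F, G) = P{D_F(X) ≤ D_F(Y)}, with Mahalanobis depth standing in for the general data depth the paper allows; the sample sizes and the Gaussian populations below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def mahalanobis_depth(points, ref):
    """Mahalanobis depth of each row of `points` relative to the sample `ref`
    (one convenient choice of data depth; the paper allows any depth)."""
    mu = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

def quality_index(x, y):
    """Empirical Q(F, G) = P{ D_F(X) <= D_F(Y) }, X ~ F, Y ~ G,
    estimated by comparing all pairs of depth values."""
    dx = mahalanobis_depth(x, x)  # depths of the F-sample w.r.t. F
    dy = mahalanobis_depth(y, x)  # depths of the G-sample w.r.t. F
    return float(np.mean(dx[:, None] <= dy[None, :]))

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
print(quality_index(x, rng.normal(size=(500, 2))))       # near .5: identical populations
print(quality_index(x, rng.normal(2.0, 1.0, (500, 2))))  # well below .5: location shift
```

A location shift or a spread increase in G pushes the G-sample toward low-depth regions of F, which is exactly why Q dwindles below .5.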
Probability Theory and Related Fields | 1986
Shaw-Hwa Lo; Kesar Singh
The product-limit estimator and its quantile process are represented as i.i.d. mean processes, with a remainder of order n^{-3/4}(log n)^{3/4} a.s. Corresponding bootstrap versions of these representations are given, which can help one visualize how the bootstrap procedure operates in this setup.
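To visualize what is being bootstrapped, here is a bare-bones product-limit (Kaplan–Meier) estimator together with one bootstrap replicate obtained by resampling (time, censoring-indicator) pairs; the toy data are assumptions for illustration, and the i.i.d. representation theory of the paper is not reproduced:

```python
import numpy as np

def product_limit(times, events, grid):
    """Kaplan-Meier product-limit estimate of the survival function,
    evaluated on `grid` (events: 1 = observed failure, 0 = censored).
    Each observation is handled separately, so ties are split arbitrarily."""
    order = np.argsort(times, kind="stable")
    t, d = times[order], events[order]
    n = len(t)
    at_risk = n - np.arange(n)                       # subjects still at risk
    factors = np.where(d == 1, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.cumprod(factors)                       # S just after each t_(i)
    idx = np.searchsorted(t, grid, side="right") - 1
    return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
events = np.array([1, 0, 1, 1, 0])
grid = np.array([0.5, 1.5, 3.5, 4.5])
print(product_limit(times, events, grid))

# Bootstrap version: resample the (time, event) pairs with replacement
# and recompute the product-limit curve on the same grid.
rng = np.random.default_rng(0)
idx = rng.integers(0, len(times), len(times))
print(product_limit(times[idx], events[idx], grid))  # one bootstrap replicate
```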
Journal of the American Statistical Association | 1997
Regina Y. Liu; Kesar Singh
Abstract In this article we propose some new notions of limiting P values for hypothesis testing. The limiting P value (LP) here not only provides the usual attractive interpretation of a P value as the strength in support of the null hypothesis coming from the observed data, but also has several advantages. First, it allows us to resample directly from the empirical distribution (in the bootstrap implementations), rather than from the estimated population distribution satisfying the null constraints. Second, it serves as a test statistic and as a P value simultaneously, and thus enables us to obtain test results directly without having to construct an explicit test statistic and then establish or approximate its sampling distribution. These are the two steps generally required in a standard testing procedure. Using bootstrap and the concept of data depth, we have provided LPs for a broad class of testing problems where the parameters of interest can be either finite or infinite dimensional. Some compute...
Journal of Multivariate Analysis | 1978
Gutti Jogesh Babu; Kesar Singh
Let {X_n} be a strictly stationary φ-mixing process with ∑_{j=1}^{∞} φ^{1/2}(j) ...
Journal of The Royal Statistical Society Series B-statistical Methodology | 1997
Arthur B. Yeh; Kesar Singh
We propose and study the bootstrap confidence regions for multivariate parameters based on Tukey’s depth. The bootstrap is based on the normalized or Studentized statistic formed from an independent and identically distributed random sample obtained from some unknown distribution in Rq. The bootstrap points are deleted on the basis of Tukey’s depth until the desired confidence level is reached. The proposed confidence regions are shown to be second order balanced in the context discussed by Beran. We also study the asymptotic consistency of Tukey’s depth‐based bootstrap confidence regions. The applicability of the method proposed is demonstrated in a simulation study.
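The peeling idea can be sketched as follows, with an approximate Tukey depth computed from random projection directions (exact halfspace depth in R^q is expensive); the plain sample mean replaces the normalized/Studentized statistic of the paper, and the second-order balance analysis is omitted:

```python
import numpy as np

def approx_tukey_depth(points, directions=200, rng=None):
    """Approximate Tukey (halfspace) depth of each point within the cloud:
    for each random direction, a point's depth is the smaller tail count
    of its projection; overall depth is the minimum over directions."""
    g = np.random.default_rng(rng)
    n, q = points.shape
    u = g.normal(size=(directions, q))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = points @ u.T                                # (n, directions)
    ranks = proj.argsort(axis=0).argsort(axis=0)       # rank per direction
    tail = np.minimum(ranks + 1, n - ranks)
    return tail.min(axis=1) / n

def depth_region(sample, B=2000, level=0.90, rng=0):
    """Bootstrap the sample mean, then delete the shallowest bootstrap
    points until roughly `level` of them remain; the retained cloud
    sketches the depth-based confidence region."""
    g = np.random.default_rng(rng)
    n = len(sample)
    boot = np.array([sample[g.integers(0, n, n)].mean(axis=0)
                     for _ in range(B)])
    d = approx_tukey_depth(boot, rng=g)
    cutoff = np.quantile(d, 1.0 - level)               # peel the shallow points
    return boot[d >= cutoff]
```

Because depth orders points from the center outward, the retained set is automatically centered on the deepest bootstrap values rather than on coordinatewise quantiles.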
Journal of the American Statistical Association | 2011
Minge Xie; Kesar Singh; William E. Strawderman
This article develops a unifying framework, as well as robust meta-analysis approaches, for combining studies from independent sources. The device used in this combination is a confidence distribution (CD), which uses a distribution function, instead of a point (point estimator) or an interval (confidence interval), to estimate a parameter of interest. A CD function contains a wealth of information for inferences, and it is a useful device for combining studies from different sources. The proposed combining framework not only unifies most existing meta-analysis approaches, but also leads to development of new approaches. We illustrate in this article that this combining framework can include both the classical methods of combining p-values and modern model-based meta-analysis approaches. We also develop, under the unifying framework, two new robust meta-analysis approaches, with supporting asymptotic theory. In one approach each study size goes to infinity, and in the other approach the number of studies goes to infinity. Our theoretical development suggests that both these robust meta-analysis approaches have high breakdown points and are highly efficient for normal models. The new methodologies are applied to study-level data from publications on prophylactic use of lidocaine in heart attacks and a treatment of stomach ulcers. The robust methods performed well when data are contaminated and have realistic sample sizes and number of studies.
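A small sketch of one classical instance of such combining, assuming normal-based CDs from each study and inverse-standard-error weights (one common choice, not necessarily the paper's robust recipe):

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def normal_cd(theta, est, se):
    """Normal-based confidence distribution from one study:
    H_i(theta) = Phi((theta - est_i) / se_i)."""
    return N.cdf((theta - est) / se)

def combine_cds(theta, ests, ses):
    """Phi^{-1} combining recipe with weights w_i = 1/se_i:
    H_c(theta) = Phi( sum_i w_i Phi^{-1}(H_i(theta)) / sqrt(sum_i w_i^2) )."""
    ws = [1.0 / s for s in ses]
    num = sum(w * N.inv_cdf(normal_cd(theta, e, s))
              for w, e, s in zip(ws, ests, ses))
    return N.cdf(num / sqrt(sum(w * w for w in ws)))

# With these choices the median of the combined CD recovers the
# inverse-variance weighted estimate (illustrative numbers).
ests, ses = [1.0, 1.4, 0.8], [0.2, 0.3, 0.25]
w2 = [1.0 / s**2 for s in ses]
theta_hat = sum(e * w for e, w in zip(ests, w2)) / sum(w2)
print(theta_hat, combine_cds(theta_hat, ests, ses))  # combined CD equals .5 at theta_hat
```

The point of the framework is that the same combining template accommodates many input CDs and weightings, including the robust choices developed in the article.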
Journal of Multivariate Analysis | 1985
G. Jogesh Babu; Kesar Singh
The validity of the one-term Edgeworth expansion is proved for the multivariate mean of a random sample drawn without replacement under a limiting non-latticeness condition on the population. The theorem is applied to deduce the one-term expansion for the univariate statistics which can be expressed in a certain linear plus quadratic form. An application of the results to the theory of bootstrap is mentioned. A one-term expansion is also proved in the univariate lattice case.
arXiv: Statistics Theory | 2007
Kesar Singh; Minge Xie; William E. Strawderman
The notion of confidence distribution (CD), an entirely frequentist concept, is in essence a Neymanian interpretation of Fisher's fiducial distribution. It contains information related to every kind of frequentist inference. In this article, a CD is viewed as a distribution estimator of a parameter. This leads naturally to consideration of the information contained in a CD, comparison of CDs and optimal CDs, and connection of the CD concept to the (profile) likelihood function. A formal development of a multiparameter CD is also presented.
Statistics & Probability Letters | 1994
J. Cabrera; G. Maguluri; Kesar Singh
It is observed that the sample median for an even sample size n = 2m is superior to the sample median for n = 2m + 1. A slight modification in the definition of the sample median for the odd sample size is suggested, which eliminates this odd property.
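The observed phenomenon can be checked by simulation; this sketch only compares the usual medians at illustrative sample sizes for standard normal data, and does not reproduce the paper's proposed modification for odd n:

```python
import numpy as np

def median_mse(n, reps=40000, seed=0):
    """Monte Carlo mean squared error of the sample median of n standard
    normal observations (the population median is 0)."""
    rng = np.random.default_rng(seed)
    med = np.median(rng.normal(size=(reps, n)), axis=1)
    return float(np.mean(med ** 2))

# For n = 2m the sample median averages the two central order statistics;
# the paper's observation is that this even-n median can beat the
# single-order-statistic median at n = 2m + 1.
print(median_mse(20), median_mse(21))
```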
Journal of the American Statistical Association | 2009
Minge Xie; Kesar Singh; Cun-Hui Zhang
Frequentist confidence intervals for population ranks and their statistical justifications are not well established, even though there is a great need for such procedures in practice. How do we assign confidence bounds for the ranks of health care facilities, schools, and financial institutions based on data that do not clearly separate the performance of different entities? The commonly used bootstrap-based frequentist confidence intervals and Bayesian intervals for population ranks may not achieve the intended coverage probability in the frequentist sense, especially in the presence of unknown ties or “near ties” among the populations to be ranked. Given random samples from k populations, we propose confidence bounds for population ranking parameters and develop rigorous frequentist theory and nonstandard bootstrap inference for population ranks, which allow ties and near ties. In the process, a notion of modified population rank is introduced that appears quite suitable for dealing with the population ranking problem. The proposed methodology and theoretical results are illustrated through simulations and a real dataset from a health research study involving 79 Veterans Health Administration (VHA) facilities. The results are extended to general risk adjustment models.