Bikas Kumar Sinha
University of Calcutta
Publications
Featured research published by Bikas Kumar Sinha.
Journal of Multivariate Analysis | 1977
J.K. Ghosh; Bikas Kumar Sinha; Bimal K. Sinha
A problem of J. Neyman (in Classical and Contagious Discrete Distributions (G. P. Patil, Ed.), 1965, pp. 4-14) regarding a characterization of positive and negative multinomial distributions is studied in this paper. Some properties of multivariate power series distributions in general, which should be of independent interest, are also derived.
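For context (a standard definition, not taken from the paper itself), a k-variate power series distribution with series function f(θ) = Σ_x a(x) θ_1^{x_1} ⋯ θ_k^{x_k}, a(x) ≥ 0, is usually written as

\[
P(X_1 = x_1, \ldots, X_k = x_k) \;=\; \frac{a(x_1,\ldots,x_k)\,\theta_1^{x_1}\cdots\theta_k^{x_k}}{f(\theta_1,\ldots,\theta_k)},
\]

with the positive and negative multinomial laws arising from particular choices of a(·) and f(·).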
Communications in Statistics-theory and Methods | 1976
Bimal Kumar Sinha; Bikas Kumar Sinha
This paper provides a partial solution to a problem posed by J. Neyman (1965) regarding the characterization of the multivariate negative binomial distribution based on properties of regression. It is shown that some of these regression properties characterize the form of the nonsingular dispersion matrix of the parent distribution, which, interestingly enough, corresponds to only two types, viz. those of the positive and negative multivariate binomial distributions.
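As a hedged illustration of the two dispersion types (these are standard facts about the two families, not a quotation of the paper's result): writing μ for the mean vector, the positive multinomial with index n has dispersion matrix

\[
\Sigma \;=\; \operatorname{diag}(\mu) - \tfrac{1}{n}\,\mu\mu^{\top} \quad\text{(all covariances negative)},
\]

while the negative multinomial with index r has

\[
\Sigma \;=\; \operatorname{diag}(\mu) + \tfrac{1}{r}\,\mu\mu^{\top} \quad\text{(all covariances positive)}.
\]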
Calcutta Statistical Association Bulletin | 1970
Bikas Kumar Sinha
The principle of invariance in the selection of inference procedures is well-known [Lehmann (1959)]. We shall consider here some linear inferential problems and their invariance under the symmetric group of permutations of (all or a subset of) the parameters occurring in such problems. It is first shown that the problem in general is not invariant under this group. Next we characterise the particular form of the problem when it is invariant. Finally, these concepts are utilised to derive optimum designs for such invariant problems w.r.t. various known optimality criteria, viz., the A-, D- and E-optimality criteria [Kiefer (1958), (1959)]. More particularly, we shall be concerned with the following two problems: (a) derivation of linear inferential problems which are invariant under the symmetric group of permutations of (all or a subset of) the parameters; (b) derivation of optimum experimental designs for such invariant linear inferential problems w.r.t. the various known optimality criteria listed above. We shall deal with problem (a) in Sections 2 to 4, while problem (b) will be treated in Section 5.
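As a minimal sketch of the optimality criteria invoked above (the standard A-, D- and E-criteria, not code from the paper), the following assumes a design is summarised by a nonsingular information matrix M:

```python
# Sketch only: evaluating the standard A-, D- and E-optimality criteria
# for a design summarised by its (nonsingular) information matrix M.
import numpy as np

def optimality_values(M: np.ndarray) -> dict:
    """Return the A-, D- and E-criterion values of an information matrix.

    A-optimality minimises trace(M^{-1}); D-optimality maximises det(M);
    E-optimality maximises the smallest eigenvalue of M.
    """
    M = np.asarray(M, dtype=float)
    return {
        "A": np.trace(np.linalg.inv(M)),   # smaller is better
        "D": np.linalg.det(M),             # larger is better
        "E": np.linalg.eigvalsh(M).min(),  # larger is better
    }

if __name__ == "__main__":
    # Toy information matrix of a two-parameter linear model (illustrative).
    M = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    print(optimality_values(M))
```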
Annals of the Institute of Statistical Mathematics | 1973
Bikas Kumar Sinha
Let Ω be the set of all possible values of some unknown parameter θ. An experiment X consists of the abstract random variable X, taking values in a space 𝒳 on which there is defined a σ-field 𝒜 of subsets, together with a family {P_θ : θ ∈ Ω} of probability distributions of X over (𝒳, 𝒜). Thus, we speak of performing the experiment X = {𝒳, 𝒜, P_θ, θ ∈ Ω} or, equivalently, of observing the random variable X. Suppose X = {𝒳, 𝒜, P_θ, θ ∈ Ω} and Y = {𝒴, ℬ, Q_θ, θ ∈ Ω} are two experiments with the same parameter space Ω. The concept of X being sufficient for Y has been precisely defined by Blackwell [1], [2] and redefined by DeGroot [4]. Roughly speaking, an experiment involving the observation of the random variable X is sufficient for another experiment involving the observation of the random variable Y if it is possible, from an observation on X and an auxiliary known randomisation, to generate a random variable with the same distribution as Y for all possible values of any unknown parameters. DeGroot, in his paper, explored the relevance of this concept of sufficiency in problems in which a fixed total number of observations must be allocated in some optimal fashion among various possible alternatives. DeGroot formulated the problem in a more general setting as follows: Let Z be a random variable (or random vector) with distribution P_θ depending on the unknown real-valued parameter θ, θ ∈ Ω. Suppose that it is possible to draw a random sample Z_1, Z_2, …, Z_k from the distribution P_θ, but that the values of Z_1, Z_2, …, Z_k cannot be observed. Instead, for each value of i (i = 1, 2, …, k), what can be observed are the values of n_i random variables X_{i1}, X_{i2}, …, X_{in_i} which, conditionally on any given value Z_i = z_i, are independently distributed, each with the common known conditional distribution function G(· | Z_i). Since Z_1, Z_2, …, Z_k are independent, the random vectors (X_{11}, …, X_{1n_1}), …, (X_{k1}, …, X_{kn_k}) are also independent. Specifically, the joint distribution …
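In symbols (the standard Blackwell formulation, paraphrased here rather than quoted from the paper), X is sufficient for Y if there is a stochastic kernel K(x, B), not depending on θ, such that

\[
Q_\theta(B) \;=\; \int_{\mathcal{X}} K(x, B)\, dP_\theta(x) \qquad \text{for all } B \in \mathcal{B},\ \theta \in \Omega;
\]

the kernel plays the role of the "auxiliary known randomisation" mentioned above.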
Calcutta Statistical Association Bulletin | 1993
Bikas Kumar Sinha; S. Sengupta
Considered is the set-up of sampling from a finite population of N (known) units divided into k (unknown) mutually exclusive and exhaustive categories, the ith category possessing N_i (unknown) units, 1 ⩽ i ⩽ k. As and when sampling is terminated, some categories may remain undiscovered. A quantity of interest is the probability of discovering a new category when an additional draw is made. We address the problem of estimating this unknown probability, and also that of estimating more general parametric functions. Simple random sampling (with and without replacement) is discussed.
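A minimal simulation sketch of the quantity in question (illustrative only, with made-up category sizes; this is not the estimator developed in the paper): under simple random sampling with replacement, the probability that one additional draw discovers a new category is simply the population share of the categories not yet seen.

```python
# Illustration only: the true probability that the next SRSWR draw falls in
# a category not yet represented in the sample.  Category sizes are made up.
import random
from collections import Counter

def prob_new_category(population, sample):
    """True probability that the next SRSWR draw hits an unseen category."""
    seen = set(sample)
    counts = Counter(population)
    unseen_units = sum(n for cat, n in counts.items() if cat not in seen)
    return unseen_units / len(population)

if __name__ == "__main__":
    random.seed(1)
    # Population of N = 100 units in k = 4 categories (sizes are illustrative).
    population = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5
    sample = [random.choice(population) for _ in range(10)]  # SRSWR sample
    print(sorted(set(sample)), prob_new_category(population, sample))
```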
Calcutta Statistical Association Bulletin | 1970
Bikas Kumar Sinha
Calcutta Statistical Association Bulletin | 1973
Bikas Kumar Sinha
Calcutta Statistical Association Bulletin | 1976
Basudeb Adhikary; Bikas Kumar Sinha
Annals of the Institute of Statistical Mathematics | 1975
Bikas Kumar Sinha; Bimal Kumar Sinha
Calcutta Statistical Association Bulletin | 1969
Bimal Kumar Sinha; Bikas Kumar Sinha