K. Ferentinos
University of Ioannina
Publications
Featured research published by K. Ferentinos.
Communications in Statistics - Theory and Methods | 1990
K. Zografos; K. Ferentinos; Takis Papaioannou
φ-divergence statistics are obtained by either replacing both distributions involved in the argument of the φ-divergence measure by their sample estimates, or by replacing one distribution and treating the other as given. The sampling properties of the estimated divergence-type measures are investigated. Approximate means and variances are derived and asymptotic distributions are obtained. Tests of goodness of fit of observed frequencies to expected ones, and tests of equality of divergences based on two or more multinomial samples, are constructed.
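A minimal sketch of the plug-in idea for multinomial data, assuming φ(t) = t log t (the Kullback-Leibler case); the function name phi_divergence and the sample values are illustrative, not from the paper.

```python
import numpy as np

def phi_divergence(p, q, phi):
    """Plug-in estimate of D_phi(p, q) = sum_i q_i * phi(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = q > 0  # convention here: cells with q_i = 0 are skipped
    return float(np.sum(q[mask] * phi(p[mask] / q[mask])))

# phi(t) = t*log(t) recovers the Kullback-Leibler divergence KL(p || q).
kl = lambda t: t * np.log(np.where(t > 0, t, 1.0))

# "Replace one distribution and treat the other as given":
counts = np.array([18, 22, 30, 30])       # hypothetical multinomial sample
p_hat = counts / counts.sum()             # sample estimate of p
q0 = np.array([0.25, 0.25, 0.25, 0.25])   # fixed hypothesized distribution
print(phi_divergence(p_hat, q0, kl))      # estimated divergence
```

Under the null hypothesis p = q0, the statistic 2n·KL(p̂, q0) is the familiar likelihood-ratio (G) statistic, asymptotically χ² with k − 1 degrees of freedom, which is one way goodness-of-fit tests of the kind described above arise.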
Information & Computation | 1981
K. Ferentinos; Takis Papaioannou
In this paper, methods are presented for obtaining parametric measures of information from the non-parametric ones and from information matrices. The properties of these measures are examined. The one-dimensional parametric measures derived from the non-parametric ones are superior to Fisher's information measure because they are free from regularity conditions. If, however, we impose the regularity conditions of the Fisherian theory of information, these measures become linear functions of Fisher's measure.
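One standard way a non-parametric (divergence-type) measure gives rise to a parametric one is through its local behaviour; the following textbook expansion for the Kullback-Leibler divergence is our illustration, not necessarily the construction used in the paper.

```latex
% Under the usual regularity conditions, a second-order Taylor expansion gives
\mathrm{KL}\bigl(f_\theta \,\|\, f_{\theta+\varepsilon}\bigr)
  = \int f_\theta(x)\,\log\frac{f_\theta(x)}{f_{\theta+\varepsilon}(x)}\,dx
  = \frac{\varepsilon^{2}}{2}\, I_F(\theta) + o(\varepsilon^{2}),
% so Fisher's information appears as the curvature of the divergence in \theta:
I_F(\theta) = \lim_{\varepsilon \to 0}
  \frac{2}{\varepsilon^{2}}\,\mathrm{KL}\bigl(f_\theta \,\|\, f_{\theta+\varepsilon}\bigr).
```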
Canadian Journal of Statistics / Revue Canadienne de Statistique | 1986
K. Zografos; K. Ferentinos; Takis Papaioannou
The problem of the loss of information due to discretization of the data, and of its estimation, is studied for various measures of information. The results of Ghurye and Johnson (1981) are generalized and supplemented for the Csiszár and Rényi measures of information, as well as for Fisher's information matrix.
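A minimal numerical sketch of the phenomenon (ours, not the paper's): for a normal location parameter θ with unit variance, the full-data Fisher information is exactly 1, while the information retained by binned (discretized) observations, I(θ) = Σᵢ pᵢ′(θ)²/pᵢ(θ), is strictly smaller.

```python
import numpy as np
from scipy.stats import norm

def binned_fisher_info(edges, theta):
    """Fisher information about a N(theta, 1) location parameter when X is
    observed only through bins: I(theta) = sum_i p_i'(theta)^2 / p_i(theta)."""
    edges = np.asarray(edges, float)
    # Cell probabilities p_i(theta) = Phi(b_i - theta) - Phi(a_i - theta)
    p = norm.cdf(edges[1:] - theta) - norm.cdf(edges[:-1] - theta)
    # Their theta-derivatives: phi(a_i - theta) - phi(b_i - theta)
    dp = norm.pdf(edges[:-1] - theta) - norm.pdf(edges[1:] - theta)
    mask = p > 0
    return float(np.sum(dp[mask] ** 2 / p[mask]))

edges = np.concatenate(([-np.inf], np.linspace(-3, 3, 13), [np.inf]))
print(binned_fisher_info(edges, theta=0.0))  # slightly below 1: the loss
```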
Journal of Statistical Planning and Inference | 1982
K. Ferentinos; Takis Papaioannou
We define measures of information contained in an experiment which are by-products of the parametric measures of Fisher, Vajda, Mathai and Boekee, and of the non-parametric measures of Bhattacharyya, Rényi, Matusita, Kagan and Csiszár. We use these measures to compare sufficient experiments according to Blackwell's definition. In particular, we prove that if δ_X and δ_Y are two experiments with δ_X ≥ δ_Y, then I_X ≥ I_Y for all of the above measures.
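For context, Blackwell's ordering admits the following standard formulation (included for the reader; the notation is ours): δ_X ≥ δ_Y means the second experiment can be generated from the first by a randomization that does not depend on θ.

```latex
% \delta_X \ge \delta_Y iff there is a Markov kernel M, free of \theta, with
f_Y(y;\theta) = \int M(y \mid x)\, f_X(x;\theta)\, dx
  \qquad \text{for all } \theta .
% A data-processing argument then yields I_X \ge I_Y for information measures
% that are monotone under such randomizations, which is the result stated above.
```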
Communications in Statistics - Theory and Methods | 2005
Takis Papaioannou; K. Ferentinos
Fisher's information number is the second moment of the "score function" in which the derivative is taken with respect to x rather than the parameter θ. It is Fisher's information for a location parameter, and is also called the shift-invariant Fisher information. In recent years, Fisher's information number has been used frequently in several places, regardless of the parameters of the distribution or of their nature. Is this number a nominal, standard, and typical measure of information? The Fisher information number is examined in light of the properties of classical statistical information theory. It has some properties analogous to those of Fisher's measure but, in general, it does not have good properties when used as a measure of information for a θ that is not a location parameter; even in the location-parameter case, the regularity conditions must be satisfied. It does not possess the two fundamental properties of the mother information, namely monotonicity and invariance under sufficient transformations. Thus the Fisher information number should not be used as a measure of information, except when θ is a location parameter. On the other hand, Fisher's information number, viewed as a characteristic of a distribution f(x), has other interesting properties; as a by-product of its superadditivity property, a new coefficient of association is introduced.
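In symbols (our notation, following the description above), the Fisher information number of a density f and its agreement with the location-parameter Fisher information read:

```latex
% Fisher information number: the score is differentiated in x, not in \theta
J(f) = \int \Bigl( \tfrac{\partial}{\partial x} \log f(x) \Bigr)^{2} f(x)\, dx .
% For a location family f(x;\theta) = f(x-\theta), substituting u = x - \theta gives
I_F(\theta)
  = \int \Bigl( \tfrac{\partial}{\partial \theta} \log f(x-\theta) \Bigr)^{2} f(x-\theta)\, dx
  = J(f),
% so J(f) coincides with Fisher's information exactly in the location case.
```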
Annals of the Institute of Statistical Mathematics | 1989
K. Zografos; K. Ferentinos; Takis Papaioannou
In this paper we investigate the limiting behaviour of the measures of information due to Csiszár, Rényi and Fisher. Conditions for convergence of measures of information and for convergence of Radon-Nikodym derivatives are obtained. Our results extend the results of Kullback (1959, Information Theory and Statistics, Wiley, New York) and Kirmani (1971, Ann. Inst. Statist. Math., 23, 157–162).
Communications in Statistics - Theory and Methods | 2006
K. Ferentinos; K. X. Karakostas
An interesting topic in mathematical statistics is the construction of confidence intervals. Two types of interval, both based on the pivotal-quantity method, are available: the Shortest Confidence Interval (SCI) and the Equal Tails Confidence Interval (ETCI). The aims of this article are: (i) to clarify and comment on methods of finding such intervals; (ii) to investigate the relationship between these two types of interval; (iii) to point out that confidence intervals of shortest length do not always exist, even when the distribution of the pivotal quantity is symmetric; and finally, (iv) to give analogous results under the Bayesian approach.
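A minimal numerical sketch (ours, not the article's) contrasting the two intervals for a normal variance via the pivot (n − 1)S²/σ² ~ χ²ₙ₋₁; the function variance_cis and the sample values are hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import minimize_scalar

def variance_cis(s2, n, alpha=0.05):
    """ETCI and SCI for a normal variance. Both have the form
    [(n-1)s2/b, (n-1)s2/a] with P(a <= Q <= b) = 1 - alpha for the
    pivot Q ~ chi2(n-1); the SCI minimizes the length, prop. to 1/a - 1/b."""
    df, c = n - 1, (n - 1) * s2
    # Equal tails: cut off alpha/2 of the pivot's mass in each tail.
    a_et, b_et = chi2.ppf(alpha / 2, df), chi2.ppf(1 - alpha / 2, df)
    # Shortest: slide the retained mass to minimize the interval length.
    def length(a):
        b = chi2.ppf(chi2.cdf(a, df) + 1 - alpha, df)
        return 1.0 / a - 1.0 / b
    res = minimize_scalar(length, bounds=(1e-6, chi2.ppf(alpha, df)),
                          method="bounded")
    a_s = res.x
    b_s = chi2.ppf(chi2.cdf(a_s, df) + 1 - alpha, df)
    return (c / b_et, c / a_et), (c / b_s, c / a_s)

etci, sci = variance_cis(s2=4.0, n=15)  # hypothetical sample variance, size
print(etci, sci)  # here the SCI comes out strictly shorter than the ETCI
```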
Information Sciences | 1996
Ch. Tsairidis; K. Ferentinos; Takis Papaioannou
Fisher- and divergence-type measures of information in the setting of random censoring are introduced and compared with the measures of Hollander, Proschan, and Sconing. The basic properties of statistical information theory are established for these measures of information. The winners are the classical measures of information.
Metrika | 1994
K. Zografos; K. Ferentinos
Based on the Cramér-Rao inequality (in the multiparameter case), the lower bound for the Fisher information matrix is achieved if and only if the underlying distribution is the r-parameter exponential family. This family, and the lower bound of the Fisher information matrix, are characterized when constraints in the form of expected values of certain statistics are available. Combining these results, we can find the class of parametric functions and the corresponding UMVU estimators via the Cramér-Rao inequality.
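The one-parameter version of the standard attainment condition illustrates the mechanism (the paper treats the r-parameter analogue):

```latex
% Equality in the Cram\'er-Rao bound for an unbiased estimator T of g(\theta)
% holds iff the score is an affine function of T:
\frac{\partial}{\partial \theta} \log f(x;\theta)
  = A(\theta)\,\bigl( T(x) - g(\theta) \bigr),
% and integrating in \theta shows that f must have the exponential-family form
f(x;\theta) = \exp\bigl( Q(\theta)\,T(x) + D(\theta) + S(x) \bigr).
```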
The American Statistician | 1990
K. Ferentinos
Statistical inference for probability distributions involving truncation parameters has received recent attention in the literature. One aspect of this work is the question of shortest confidence intervals for the parameters, or for parametric functions, of these models. The topic is a classical one, and the approach follows the usual theory. In the existing literature the authors consider specific models and derive confidence intervals (not necessarily shortest). All of these models can, however, be regarded as special cases of a more general one. Use of this general model enables one to obtain shortest confidence intervals easily and to unify the different approaches. In addition, it provides a useful technique for classroom presentation of the topic.
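A textbook special case of the kind such a general model covers (our illustration, not taken from the article): for the uniform scale family U(0, θ), the pivot's increasing density forces the shortest interval against its upper endpoint.

```latex
% X_1,\dots,X_n iid U(0,\theta); the pivot Q = X_{(n)}/\theta has density
% n q^{\,n-1} on (0,1), which is increasing, so for fixed coverage the
% shortest confidence interval pushes the retained mass against q = 1:
P\bigl( \alpha^{1/n} \le X_{(n)}/\theta \le 1 \bigr) = 1 - \alpha
\;\Longrightarrow\;
\theta \in \bigl[\, X_{(n)},\; X_{(n)}\,\alpha^{-1/n} \,\bigr].
```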