Evdokia Xekalaki
Athens University of Economics and Business
Publications
Featured research published by Evdokia Xekalaki.
Computational Statistics & Data Analysis | 2003
Dimitris Karlis; Evdokia Xekalaki
The EM algorithm is the standard tool for maximum likelihood estimation in finite mixture models. Its main drawbacks are slow convergence and the dependence of the solution on both the stopping criterion and the initial values used. The problems of slow convergence and the choice of a stopping criterion have been dealt with in the literature; the present paper addresses the initial value problem for the EM algorithm. The aim of this paper is to compare several methods for choosing initial values for the EM algorithm in the case of finite mixtures, as well as to propose some new methods based on modifications of existing ones. The cases of finite normal mixtures with common variance and finite Poisson mixtures are examined through a simulation study.
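As a concrete illustration of the initialization issue, the sketch below (a hypothetical helper, assuming a finite Poisson mixture as in the paper's second case) runs EM from a user-supplied starting point, so different initial values can be compared by the log-likelihoods they reach:

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(x, lambdas, weights, tol=1e-8, max_iter=500):
    """EM for a finite Poisson mixture, started from the given
    component means and weights; returns estimates and log-likelihood."""
    x = np.asarray(x)
    lambdas = np.asarray(lambdas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    loglik = -np.inf
    for _ in range(max_iter):
        # E-step: posterior probability of each component for each observation
        dens = weights * poisson.pmf(x[:, None], lambdas)   # shape (n, k)
        total = dens.sum(axis=1)
        new_loglik = np.log(total).sum()
        resp = dens / total[:, None]
        # M-step: update weights and component means
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        lambdas = (resp * x[:, None]).sum(axis=0) / nk
        if new_loglik - loglik < tol:
            break
        loglik = new_loglik
    return lambdas, weights, new_loglik
```

Running the routine from several starting points and keeping the fit with the highest final log-likelihood is one simple strategy against which more refined initialization methods can be compared.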
Archive | 2010
Evdokia Xekalaki; Stavros Degiannakis
ARCH models for the daily S&P500 log-returns are estimated, whereas the intraday prices comprise the dataset for an ARFIMAX model. Model’s forecasting performance is statistically superior when the CBOE’s VIX index is incorporated as an explanatory variable.
Communications in Statistics-theory and Methods | 1983
Evdokia Xekalaki
The problem of studying lifelength distributions in discrete time is considered for certain forms of hazard functions. A class of life distributions that consists of the geometric, the Waring and the negative hypergeometric distributions is shown to result when the hazard function is inversely proportional to some linear function of time.
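In discrete time the hazard is $h(t) = P(T = t \mid T \ge t)$, and the probability function follows from a product of survival factors. A brief sketch of how such hazard forms generate this class (the notation here is assumed, not taken from the paper):

```latex
S(t) = P(T \ge t) = \prod_{j=0}^{t-1}\bigl(1 - h(j)\bigr),
\qquad
P(T = t) = h(t)\,S(t).
```

For a constant hazard $h(t) = p$ the product collapses to $P(T = t) = p(1-p)^{t}$, $t = 0, 1, 2, \ldots$, the geometric distribution. When $h(t) = k/(a + t)$, i.e. inversely proportional to a linear function of time, the factors $1 - h(j) = (a + j - k)/(a + j)$ telescope into ratios of gamma functions, which is the mechanism behind the Waring and negative hypergeometric members of the class.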
The Statistician | 2000
Dimitris Karlis; Evdokia Xekalaki
The importance of the Poisson distribution among the discrete distributions has led to the development of several hypothesis tests for testing whether data come from a Poisson distribution against a variety of alternative distributions. An extended simulation comparison of the power of such tests is presented. To overcome biases caused by the use of asymptotic results for the null distribution of several tests, an extended simulation was performed for calculating the required critical points for all the tests. The results can serve researchers as a guide to selecting the appropriate test from the several alternatives that are available.
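The simulation of exact critical points can be sketched for one classical candidate in such a comparison, the index-of-dispersion test (the specific battery of tests in the paper is not reproduced here); Monte Carlo quantiles under the Poisson null replace the asymptotic chi-square approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_stat(x):
    """Poisson index-of-dispersion statistic: (n - 1) * s^2 / xbar,
    asymptotically chi-square(n - 1) under the Poisson null."""
    x = np.asarray(x, dtype=float)
    return (len(x) - 1) * x.var(ddof=1) / x.mean()

def simulated_critical_point(n, lam, alpha=0.05, reps=20000):
    """Monte Carlo (1 - alpha) critical point under the Poisson null,
    avoiding the bias of the asymptotic approximation for small n."""
    stats = np.array([dispersion_stat(rng.poisson(lam, n)) for _ in range(reps)])
    return np.quantile(stats, 1 - alpha)
```

For moderate samples the simulated critical point lies close to, but not exactly at, the asymptotic chi-square quantile; that discrepancy is precisely the bias the paper's simulated critical points correct for.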
Quality Technology and Quantitative Management | 2004
Stavros Degiannakis; Evdokia Xekalaki
Autoregressive Conditional Heteroscedasticity (ARCH) models have successfully been employed to predict asset return volatility. Predicting volatility is of great importance in pricing financial derivatives, selecting portfolios, and measuring and managing investment risk accurately. In this paper, a number of univariate and multivariate ARCH models, their estimation methods, and the characteristics of financial time series that are captured by volatility models are presented. The number of possible conditional volatility formulations is vast. Therefore, a systematic presentation of the models that have been considered in the ARCH literature can be useful in guiding one's choice of a model for forecasting future volatility, with applications in financial markets.
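The basic building block of this model family can be sketched as the GARCH(1,1) variance recursion (a minimal illustration, not one of the paper's estimated specifications):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    Returns len(returns) + 1 values; the last is the one-step-ahead forecast."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var()  # common initialization: unconditional sample variance
    for t in range(1, len(r) + 1):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

When alpha + beta < 1 the process is covariance stationary and the recursion mean-reverts toward the long-run variance omega / (1 - alpha - beta), the persistence behaviour that makes ARCH-type models useful for volatility forecasting.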
Communications in Statistics-theory and Methods | 1986
John Panaretos; Evdokia Xekalaki
With the notion of success in a series of trials extended to refer to a run of like outcomes, several new distributions are obtained as the result of sampling from an urn without replacement or with additional replacements. In this context, the hypergeometric, negative hypergeometric, logarithmic series, generalized Waring, Polya and inverse Polya distributions are extended and their properties are studied.
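A small simulation makes the extended notion of success concrete: "success" occurs only when k like outcomes appear in a row while sampling without replacement (a hypothetical helper, not code from the paper):

```python
import random

def draws_until_run(white, black, k, rng=random):
    """Draw from an urn without replacement until k consecutive balls of
    the same colour appear ('success' redefined as a run of like outcomes).
    Returns the number of draws used, or None if the urn empties first."""
    urn = ["W"] * white + ["B"] * black
    rng.shuffle(urn)
    run_len, last = 0, None
    for i, ball in enumerate(urn, start=1):
        run_len = run_len + 1 if ball == last else 1
        last = ball
        if run_len == k:
            return i
    return None
```

Setting k = 1 recovers the classical single-draw notion of success, while k > 1 generates the run-based waiting-time distributions the paper studies.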
Journal of Statistical Computation and Simulation | 2002
Michael Perakis; Evdokia Xekalaki
In this paper a new process capability index is proposed, which is based on the proportion of conformance of the process and has several appealing features. The index is simple to assess and interpret and is applicable to normally or non-normally distributed processes. Moreover, its value can be assessed for continuous or discrete processes, it can be used under either unilateral or bilateral tolerances, and the assessment of confidence limits for its true value is not very involved under specific distributional assumptions. Point estimators and confidence limits for the index are investigated under two very common continuous distributions (normal and exponential).
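A minimal sketch in the spirit of such an index, assuming a normal process: the ratio (1 − p0)/(1 − p), with p the proportion of conformance and p0 a minimum acceptable proportion, is one simple formulation and may differ in detail from the paper's exact definition:

```python
from statistics import NormalDist

def conformance_index(mu, sigma, lower, upper, p0=0.9973):
    """Capability index built on the proportion of conformance
    p = P(lower <= X <= upper) under an assumed normal process.
    p0 is a minimum acceptable proportion (default: the 3-sigma level);
    values above 1 indicate the process exceeds the minimum requirement."""
    nd = NormalDist(mu, sigma)
    p = nd.cdf(upper) - nd.cdf(lower)       # proportion of conformance
    return (1 - p0) / (1 - p)
```

Because the index depends on the distribution only through p, the same definition applies to non-normal or discrete processes once p is computed (or estimated) under the appropriate model, which is the feature the abstract highlights.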
Computational Statistics & Data Analysis | 1998
Dimitris Karlis; Evdokia Xekalaki
Minimum Hellinger distance (MHD) estimation is an appealing method of estimation for discrete data, as it works well in cases where the assumed model provides a poor fit to the observed data and the maximum likelihood (ML) method fails. Spurious observations that may cause problems for the ML method often do not seem to affect the MHD method, which in general performs better with such data. In this paper MHD estimates are derived for finite Poisson mixtures. The properties of these estimators are examined and their performance is compared with that of the ML estimators. MHD estimators are both efficient and robust. A numerical example involving data sets on environmental complaints is presented. An iterative algorithm that facilitates computation is provided. The algorithm always converges to a minimum, but several initial values are needed to ensure that the global minimum is obtained.
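The MHD idea can be sketched for a single Poisson model (the paper treats finite Poisson mixtures; this simplified, hypothetical helper minimizes the squared Hellinger distance between the empirical and model probability mass functions):

```python
import numpy as np
from collections import Counter
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def mhd_poisson(x, support_max=None):
    """Minimum Hellinger distance estimate of a Poisson mean: minimizes
    sum over the support of (sqrt(empirical pmf) - sqrt(model pmf))^2."""
    x = np.asarray(x)
    m = support_max or int(x.max()) + 10
    counts = Counter(x.tolist())
    emp = np.array([counts.get(j, 0) / len(x) for j in range(m + 1)])
    sq_emp = np.sqrt(emp)

    def hdist(lam):
        # squared Hellinger distance, truncated at support point m
        return np.sum((sq_emp - np.sqrt(poisson.pmf(np.arange(m + 1), lam))) ** 2)

    res = minimize_scalar(hdist, bounds=(1e-6, float(x.max()) + 1.0),
                          method="bounded")
    return res.x
```

Because the distance weights cells through square roots of probabilities rather than likelihood contributions, a few spurious counts perturb the objective far less than they perturb the log-likelihood, which is the source of the robustness noted above.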
Journal of the Royal Statistical Society. Series A (General) | 1984
Evdokia Xekalaki
The univariate generalized Waring distribution was shown by Irwin (1968, 1975) to provide a useful accident model which enables one to split the variance into three additive components due to randomness, proneness and liability. The two non-random variance components, however, cannot be separately estimated. In this paper a way of tackling this problem is suggested by defining a bivariate extension of the generalized Waring distribution. Using this it is possible to obtain distinguishable estimates for the variance components and hence inferences can be made about the role of the underlying accident factors. The technique is illustrated by two examples.
Computational Statistics & Data Analysis | 2005
Evdokia Xekalaki; Stavros Degiannakis
The performance of an ARCH model selection algorithm based on the standardized prediction error criterion (SPEC) is evaluated. The evaluation of the algorithm is performed by comparing different volatility forecasts in option pricing through the simulation of an options market. Traders employing the SPEC model selection algorithm use the model with the lowest sum of squared standardized one-step-ahead prediction errors for obtaining their volatility forecast. The cumulative profits of participants pricing 1-day index straddle options who always use variance forecasts obtained by GARCH, EGARCH and TARCH models are compared to those of participants using variance forecasts obtained by models suggested by the SPEC algorithm. The straddles are priced on the Standard and Poor's 500 (S&P 500) index. It is concluded that traders who base their selection of an ARCH model on the SPEC algorithm achieve higher profits than those who use only a single ARCH model. Moreover, the SPEC algorithm is compared with other model selection criteria that measure the ability of the ARCH models to forecast the realized intra-day volatility. In this case, too, the SPEC algorithm users achieve the highest returns. Thus, the SPEC model selection method appears to be a useful tool for selecting the appropriate model for estimating future volatility when pricing derivatives.
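Following the abstract's description, the SPEC selection rule can be sketched as choosing the candidate with the lowest sum of squared standardized one-step-ahead prediction errors (hypothetical helper names; a zero conditional mean for returns is assumed):

```python
import numpy as np

def spec_score(returns, variance_forecasts):
    """Sum of squared standardized one-step-ahead prediction errors,
    sum_t (r_t / sigma_t)^2, where sigma_t^2 is the variance forecast
    made at time t-1 (prediction error taken as r_t itself, assuming
    a zero conditional mean)."""
    r = np.asarray(returns, dtype=float)
    s2 = np.asarray(variance_forecasts, dtype=float)
    return np.sum(r ** 2 / s2)

def spec_select(returns, forecasts_by_model):
    """Pick the model name whose variance forecasts give the lowest
    SPEC score; forecasts_by_model maps name -> forecast array."""
    return min(forecasts_by_model,
               key=lambda m: spec_score(returns, forecasts_by_model[m]))
```

Re-applying the rule over a rolling window lets the selected model change through time, which is how the simulated traders above switch among the GARCH, EGARCH and TARCH candidates.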