Joan del Castillo
Autonomous University of Barcelona
Publications
Featured research published by Joan del Castillo.
Annals of the Institute of Statistical Mathematics | 1998
Joan del Castillo; Marta Pérez-Casany
The main goal of this paper is to introduce new exponential families, derived from the concept of weighted distributions, that include and generalize the Poisson distribution. These families contain distributions with index of dispersion greater than, equal to, or smaller than one, which makes them suitable for fitting discrete data in overdispersion or underdispersion situations. We study the statistical properties of the families and provide a useful interpretation of the parameters. Two classical examples are considered in order to compare the fits with those of other distributions. Obtaining the fits with the new family requires studying the profile log-likelihood.
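A minimal sketch of the dispersion diagnostic behind this work: the index of dispersion (sample variance over sample mean) equals one for Poisson data and flags over- or underdispersion otherwise. The data and distributions below are illustrative, not from the paper.

```python
# Sketch: diagnosing over/underdispersion via the index of dispersion.
import numpy as np

def index_of_dispersion(counts):
    """Sample variance divided by sample mean; close to 1 for Poisson data."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
poisson_data = rng.poisson(lam=4.0, size=1000)                # dispersion ~ 1
overdispersed = rng.negative_binomial(n=2, p=0.4, size=1000)  # dispersion > 1

print(index_of_dispersion(poisson_data))   # near 1
print(index_of_dispersion(overdispersed))  # noticeably above 1
```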
Journal of the American Statistical Association | 1999
Joan del Castillo; Pedro Puig
We show that the likelihood ratio test of exponentiality against singly truncated normal alternatives is the uniformly most powerful unbiased test and can be expressed in terms of the sample coefficient of variation. This test is closely related to Greenwood's statistic for testing departures from the uniform distribution. Using saddlepoint methods, we provide a highly accurate approximation to the critical points of the test.
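A hedged sketch of the test statistic the abstract describes, the sample coefficient of variation. The paper approximates critical points with saddlepoint methods; the Monte Carlo calibration below is a simpler stand-in, and the two-sided rejection region is an assumption for illustration.

```python
# Sketch: exponentiality test based on the sample coefficient of variation.
import numpy as np

def sample_cv(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def mc_critical_points(n, alpha=0.05, reps=10000, seed=0):
    """Monte Carlo critical points for the CV under the exponential null."""
    rng = np.random.default_rng(seed)
    cvs = np.array([sample_cv(rng.exponential(size=n)) for _ in range(reps)])
    return np.quantile(cvs, [alpha / 2, 1 - alpha / 2])

x = np.random.default_rng(1).exponential(scale=2.0, size=50)
lo, hi = mc_critical_points(n=50)
print(sample_cv(x), (lo, hi))  # reject exponentiality if the CV falls outside
```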
Annals of the Institute of Statistical Mathematics | 1994
Joan del Castillo
This paper is concerned with the maximum likelihood estimation problem for the singly truncated normal family of distributions. Necessary and sufficient conditions, in terms of the coefficient of variation, are provided for a solution to the likelihood equations to exist. Furthermore, the maximum likelihood estimator is obtained as a limiting case when the likelihood equation has no solution.
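A minimal sketch of the estimation problem, assuming a left truncation point at zero and using a generic numerical optimiser rather than the paper's analysis of the likelihood equations; the existence condition in terms of the coefficient of variation is not checked here.

```python
# Sketch: numerical ML for a normal distribution left-truncated at zero.
import numpy as np
from scipy import stats, optimize

def negloglik(params, x):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)      # keeps sigma positive
    a = (0.0 - mu) / sigma         # truncation point 0 in standard units
    return -stats.truncnorm.logpdf(x, a, np.inf, loc=mu, scale=sigma).sum()

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=2000)
x = x[x > 0]                       # crude rejection sampling of the truncated normal

res = optimize.minimize(negloglik, x0=[x.mean(), np.log(x.std())], args=(x,))
print(res.x[0], np.exp(res.x[1]))  # estimates of mu and sigma
```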
Annals of the Institute of Statistical Mathematics | 1997
Joan del Castillo; Pedro Puig
This paper provides necessary and sufficient conditions for a solution to the likelihood equations for an exponential family of distributions that includes the Gamma, Rayleigh and singly truncated normal distributions. Furthermore, the maximum likelihood estimator is obtained as a limiting case when the equations have no solution. These results provide a way to test departures from the Rayleigh and singly truncated normal distributions using the likelihood ratio test. A new, simple way to test departures from a Gamma distribution is also introduced.
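An illustrative likelihood ratio test of exponentiality within the Gamma family, one of the three families covered by the paper's conditions; this generic LRT is not the paper's new test, just a sketch of the setting.

```python
# Sketch: LRT of exponentiality (Gamma shape fixed at 1) vs. free shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.5, size=300)

shape, loc, scale = stats.gamma.fit(x, floc=0)              # alternative fit
ll_alt = stats.gamma.logpdf(x, shape, loc=0, scale=scale).sum()
ll_null = stats.expon.logpdf(x, scale=x.mean()).sum()       # null: exponential

lrt = 2 * (ll_alt - ll_null)
print(lrt, stats.chi2.sf(lrt, df=1))   # one constrained parameter
```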
Computational Statistics & Data Analysis | 2015
Joan del Castillo; Isabel Serra
A new methodological approach that enables the use of the maximum likelihood method for the Generalized Pareto Distribution is presented. As a result, several models for the same data can be compared under the Akaike and Bayesian information criteria. The approach is based on a detailed theoretical study of the Generalized Pareto Distribution submodels with compact support.
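A short sketch of the workflow the abstract describes: fit the GPD by maximum likelihood and compare it with a nested competitor (here the exponential, i.e. GPD with shape zero) under AIC and BIC. Data are simulated for illustration.

```python
# Sketch: GPD maximum likelihood and information-criterion comparison.
import numpy as np
from scipy import stats

def aic_bic(loglik, k, n):
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(4)
x = stats.genpareto.rvs(c=0.3, scale=2.0, size=500, random_state=rng)

c, loc, scale = stats.genpareto.fit(x, floc=0)
ll_gpd = stats.genpareto.logpdf(x, c, loc=0, scale=scale).sum()
ll_exp = stats.expon.logpdf(x, scale=x.mean()).sum()

print(aic_bic(ll_gpd, k=2, n=len(x)))   # GPD: shape and scale
print(aic_bic(ll_exp, k=1, n=len(x)))   # exponential: scale only
```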
Journal of the American Statistical Association | 1999
Joan del Castillo; Pedro Puig
For two-parameter exponential models with increasing failure rate (IFR) or decreasing failure rate (DFR) distributions, necessary and sufficient conditions for the existence of a solution of the likelihood equations are given. In addition, all of the scale-invariant two-parameter statistical models closed under raising to a power and under exponential tilting are introduced. The conditions for the existence of a solution of the likelihood equations are studied for these invariant models, and the models are applied to obtain some uniformly most powerful unbiased tests of exponentiality against alternatives in these models.
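To make the setting concrete: the Weibull family arises from the exponential by raising to a power, so it is an example of the invariant models described here. The generic likelihood ratio test below is only a sketch; the paper's UMPU tests are sharper than this.

```python
# Sketch: LRT of exponentiality (Weibull shape fixed at 1) vs. free shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = stats.weibull_min.rvs(c=1.5, scale=2.0, size=300, random_state=rng)

c, loc, scale = stats.weibull_min.fit(x, floc=0)            # alternative fit
ll_alt = stats.weibull_min.logpdf(x, c, loc=0, scale=scale).sum()
ll_null = stats.expon.logpdf(x, scale=x.mean()).sum()       # null: exponential

lrt = 2 * (ll_alt - ll_null)
print(lrt, stats.chi2.sf(lrt, df=1))
```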
Statistical Modelling | 2008
Joan del Castillo; Youngjo Lee
We propose a multivariate volatility model for the behaviour of eight international equity indices. We show that many heavy-tailed volatility models used in finance can be viewed as members of the GLM class with random effects in the dispersion. Hence, the h-likelihood approach, which provides efficient and simpler algorithms for the GLM class, can be used as an estimation method for models used in finance. A comparison of the h-likelihood estimators with the ML estimators is made, and their relative merits are discussed.
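A toy simulation of the model structure referred to here, with a GLM-style log link and a random effect in the dispersion; it illustrates how such random effects generate heavy tails (excess kurtosis), but does not reproduce the paper's h-likelihood estimation. All parameter values are invented for illustration.

```python
# Sketch: returns whose log-variance carries a random effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
T = 1000
beta = -1.0                         # fixed effect in the dispersion (log-variance)
b = rng.normal(0.0, 0.5, size=T)    # random effect in the dispersion
sigma2 = np.exp(beta + b)           # GLM-style log link for the variance
returns = rng.normal(0.0, np.sqrt(sigma2))

# Mixing over the random effect thickens the tails relative to a plain normal.
print(returns.std(), stats.kurtosis(returns))   # excess kurtosis > 0
```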
ACM Transactions on Design Automation of Electronic Systems | 2017
Jaume Abella; Maria Padilla; Joan del Castillo; Francisco J. Cazorla
Extreme Value Theory (EVT) has historically been used in domains such as finance and hydrology to model worst-case events (e.g., major stock market incidents). EVT takes as input a sample of the distribution of the variable to model and fits the tail of that sample to either the Generalised Extreme Value (GEV) or the Generalised Pareto Distribution (GPD). Recently, EVT has become popular in real-time systems to derive worst-case execution time (WCET) estimates of programs. However, the application of EVT is not straightforward and requires a detailed analysis of, and customisation for, the particular problem at hand. In this article, we tailor the application of EVT to timing analysis. To that end, (1) we analyse the response time of different hardware resources (e.g., cache memories) and identify those that may lead to radically different types of execution time distributions. (2) We show that one of these distributions, known as a mixture distribution, causes problems in the use of EVT. In particular, mixture distributions complicate not only the proper selection of the GEV/GPD parameters (i.e., location, scale and shape) but also the determination of the sample size needed to ensure that enough tail values are passed to EVT and that only tail values are used by EVT to fit the GEV/GPD. Failing to select these parameters properly has a negative impact on the quality of the derived WCET estimates. We tackle these problems by (3) proposing Measurement-Based Probabilistic Timing Analysis using the Coefficient of Variation (MBPTA-CV), a new mixture-distribution-aware, WCET-suited MBPTA method that builds on recent EVT developments in other fields (e.g., finance) to automatically select the distribution parameters that best fit the maxima of the observed execution times. Our results on a simulation environment and a real board show that MBPTA-CV produces high-quality WCET estimates.
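A hedged sketch of the coefficient-of-variation idea underlying MBPTA-CV: for exponential-type tails the CV of the threshold exceedances stabilises near one, which guides how much of the tail to hand to the EVT fit. The data, threshold grid and exceedance-probability level below are assumptions for illustration, not the method's actual selection procedure.

```python
# Sketch: residual-CV threshold diagnostic, then a GPD tail fit.
import numpy as np
from scipy import stats

def residual_cv(sample, threshold):
    exc = sample[sample > threshold] - threshold
    return exc.std(ddof=1) / exc.mean()

rng = np.random.default_rng(6)
exec_times = rng.lognormal(mean=3.0, sigma=0.3, size=5000)  # stand-in measurements

for q in (0.80, 0.90, 0.95, 0.99):
    u = np.quantile(exec_times, q)
    print(q, residual_cv(exec_times, u))    # look for stabilisation near 1

# Fit the GPD to the exceedances over a chosen threshold and read a high
# quantile off the fitted tail as the probabilistic WCET estimate.
u = np.quantile(exec_times, 0.95)
exc = exec_times[exec_times > u] - u
c, loc, scale = stats.genpareto.fit(exc, floc=0)
pwcet = u + stats.genpareto.ppf(0.999999, c, loc=0, scale=scale)
print(pwcet)
```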
Computational Statistics & Data Analysis | 2011
Woojoo Lee; Johan Lim; Youngjo Lee; Joan del Castillo
Many volatility models used in financial research belong to a class of hierarchical generalized linear models with random effects in the dispersion. Therefore, the hierarchical-likelihood (h-likelihood) approach can be used. However, the dimension of the Hessian matrix is often large, so sparse-matrix techniques are useful to speed up the computation of its inverse. Using numerical studies, we show that the h-likelihood approach gives better long-term prediction for volatility than the existing MCMC method, while the MCMC method gives better short-term prediction. We also show that the h-likelihood approach gives estimates of the fixed parameters comparable to those of existing methods.
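A small sketch of the computational point made here: with a large, sparse Hessian, a sparse LU factorisation replaces the dense inverse in Newton-type updates. The tridiagonal Hessian is a stand-in, not the paper's model.

```python
# Sketch: solving against a large sparse Hessian without a dense inverse.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

n = 10000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
hessian = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

rhs = np.ones(n)              # e.g. a score vector in a Newton step
lu = splu(hessian)            # sparse factorisation, no dense n x n inverse
step = lu.solve(rhs)          # Newton-type update direction
print(step[:3])
```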
Journal of Statistical Planning and Inference | 2000
Esteban Vegas; Joan del Castillo; Jordi Ocaña
The statistical properties of a variance-reduction technique, applicable to simulations with dichotomous response variables, are examined from the standpoint of exponential models, that is, distribution families whose log-likelihood is a linear function of a sufficient statistic of fixed dimension. It is established that this variance-reduction technique induces some distortion that is explainable in terms of the statistical curvature of the resulting exponential model. The curvature concept used here is a multiparametric generalization of Efron's definition. It is calculated explicitly, and its relation to the amount of variance reduction and to the asymptotic distribution of the relevant statistics is discussed. It is concluded that Efron's criteria for low curvature (associated with good statistical properties) are valid in this context and generally met for the usual sample sizes in simulation (some thousands of replicates).
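One classical variance-reduction technique for dichotomous responses, importance sampling by exponential tilting, shown only to make the setting concrete; the tilting itself induces an exponential family, which is the kind of structure the paper analyses. This need not be the exact technique the paper studies.

```python
# Sketch: variance reduction for a Bernoulli (dichotomous) estimand.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t, n = 3.0, 100000                     # estimate p = P(Z > t), Z ~ N(0, 1)

# Crude Monte Carlo: average of Bernoulli indicators.
z = rng.normal(size=n)
crude = (z > t).astype(float)

# Tilted sampling: draw from N(t, 1) and reweight by the likelihood ratio.
y = rng.normal(loc=t, size=n)
weights = stats.norm.pdf(y) / stats.norm.pdf(y, loc=t)
tilted = (y > t) * weights

print(crude.mean(), crude.std(ddof=1) / np.sqrt(n))
print(tilted.mean(), tilted.std(ddof=1) / np.sqrt(n))   # far smaller std. error
```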