Alberto Luceño
University of Cantabria
Publications
Featured research published by Alberto Luceño.
Journal of Atmospheric and Oceanic Technology | 2007
Fernando J. Méndez; Melisa Menéndez; Alberto Luceño; Inigo J. Losada
A statistical model to analyze different time scales of the variability of extreme high sea levels is presented. The model uses a time-dependent generalized extreme value (GEV) distribution to fit monthly maxima series and is applied to a large historical tidal gauge record (San Francisco, California). It allows the identification and estimation of the effects of several time scales (such as seasonality, interdecadal variability, and secular trends) in the location, scale, and shape parameters of the probability distribution of extreme sea levels. The inclusion of seasonal effects explains a large amount of data variability, thereby allowing a more efficient estimation of the processes involved. A significant correlation with the Southern Oscillation index and the nodal cycle, as well as an increase of about 20% in the secular variability of the scale parameter, has been detected for the particular dataset analyzed. Results show that the model is adequate for a complete analysis of seasonal-to-interannual sea level extremes, providing time-dependent quantiles and confidence intervals.
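The core of the model is a GEV likelihood whose parameters vary with time. The sketch below illustrates that idea only (it is not the paper's full model, which also includes interdecadal covariates, long-term trends and time-varying scale and shape): a GEV with an annual harmonic in the location parameter is fitted to monthly maxima by maximum likelihood. Variable names and the toy data are illustrative; note that scipy's genextreme parameterizes the shape as c = -xi.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Sketch: GEV with an annual harmonic in the location parameter, fitted to
# monthly maxima by maximum likelihood.  The paper's model is richer (trends,
# interdecadal covariates, time-varying scale and shape); this shows the idea only.

def neg_log_lik(params, z, month):
    mu0, mu_cos, mu_sin, log_sigma, xi = params
    t = 2.0 * np.pi * month / 12.0
    mu = mu0 + mu_cos * np.cos(t) + mu_sin * np.sin(t)    # seasonal location
    sigma = np.exp(log_sigma)                             # constant scale in this sketch
    # scipy's genextreme uses c = -xi relative to the usual GEV shape parameter
    return -np.sum(genextreme.logpdf(z, c=-xi, loc=mu, scale=sigma))

# Toy "monthly maxima": 30 years with a seasonal cycle in the location
rng = np.random.default_rng(0)
month = np.tile(np.arange(1, 13), 30)
z = genextreme.rvs(c=-0.1, loc=1.0 + 0.2 * np.cos(2 * np.pi * month / 12.0),
                   scale=0.1, size=month.size, random_state=rng)

start = np.array([z.mean(), 0.0, 0.0, np.log(z.std()), 0.05])
fit = minimize(neg_log_lik, start, args=(z, month), method="Nelder-Mead")
print(fit.x)   # estimated (mu0, mu_cos, mu_sin, log(sigma), xi)
```

Additional harmonics, trend terms or covariates such as a climate index can be added to the location (or to the log-scale) in the same way, which is how the seasonal-to-interdecadal effects described above enter the model.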
Computational Statistics & Data Analysis | 1997
Ali S. Hadi; Alberto Luceño
The likelihood principle is one of the most important concepts in statistics. Among other things, it is used to obtain point estimators for the parameters of probability distributions of random variables by maximizing the likelihood function. The resulting maximum likelihood estimators usually have desirable properties such as consistency and efficiency. However, these estimators are often not robust, as there is usually a trade-off between robustness and efficiency: the more robust an estimator is, the less efficient it may be when the data come from a Gaussian distribution. In this paper we investigate how the estimators change when the likelihood function is replaced by a trimmed version of it. The idea here is to trim the likelihood function rather than directly trim the data. Because the likelihood is scalar-valued, it is always possible to order and trim univariate as well as multivariate observations according to their contributions to the likelihood function. The degree of trimming depends on some parameters to be specified by the analyst. We show how this trimmed likelihood principle produces many of the existing estimators (e.g., maximum likelihood, least squares, least trimmed squares, least median of squares, and minimum volume ellipsoid estimators) as special cases. Since the resulting estimators may be very robust, they can be used, for example, for outlier detection. In some cases the estimators can be obtained in closed form; in other cases they require numerical solutions, and we provide several algorithms for computing the estimates. The method and the algorithms are illustrated by several examples of both discrete and continuous distributions.
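As an illustration of the trimming idea (an illustrative sketch, not code from the paper), the following fits a normal location/scale model by maximizing a trimmed likelihood: each observation's log-likelihood contribution is computed, the contributions are ordered, and only the h largest are summed, so the points that fit the model worst are trimmed automatically. The contamination level, trimming fraction and starting values are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Trimmed-likelihood sketch for a normal location/scale model: keep only the
# h largest per-observation log-likelihood contributions, so potential outliers
# (the worst-fitting points) are trimmed automatically.

def neg_trimmed_log_lik(params, x, h):
    mu, log_sigma = params
    contrib = norm.logpdf(x, loc=mu, scale=np.exp(log_sigma))  # per-observation terms
    kept = np.sort(contrib)[-h:]                               # h largest contributions
    return -np.sum(kept)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(10.0, 1.0, 5)])  # 5 outliers
h = int(0.9 * len(x))                                          # trimming level (analyst's choice)

fit = minimize(neg_trimmed_log_lik, x0=[np.median(x), 0.0], args=(x, h),
               method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)   # close to (0, 1) despite the contaminating cluster
```

The trimmed objective is non-smooth and can have several local maxima, which is why the paper devotes attention to algorithms; a single Nelder-Mead run from a robust starting point is used here only for illustration.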
Computational Statistics & Data Analysis | 2006
Alberto Luceño
Some of the most powerful techniques currently available to test the goodness of fit of a hypothesized continuous cumulative distribution function (CDF) use statistics based on the empirical distribution function (EDF), such as those of Kolmogorov, Cramér-von Mises, and Anderson-Darling, among others. Here, the use of EDF statistics is analyzed for estimation purposes. In this approach, maximum goodness-of-fit estimators (also called minimum distance estimators) of the parameters of the CDF are obtained by minimizing any of the EDF statistics with respect to the unknown parameters. The results show that there is no unique EDF statistic that can be considered most efficient in all situations. Consequently, the possibility of defining new EDF statistics is entertained; in particular, an Anderson-Darling statistic of degree two and one-sided Anderson-Darling statistics of degree one and two appear to be notable in some situations. The procedure is shown to deal successfully with the estimation of the parameters of homogeneous and heterogeneous generalized Pareto distributions, even when maximum likelihood and other estimation methods fail.
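A minimal sketch of the maximum goodness-of-fit idea for a (homogeneous) generalized Pareto distribution follows: the classical Anderson-Darling statistic is minimized over the unknown parameters instead of maximizing the likelihood. The degree-two and one-sided statistics proposed in the paper are not reproduced here; names and toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

# Maximum goodness-of-fit (minimum distance) sketch: estimate generalized Pareto
# parameters by minimizing the classical Anderson-Darling statistic.

def anderson_darling(params, x):
    xi, log_sigma = params
    n = len(x)
    u = genpareto.cdf(np.sort(x), c=xi, scale=np.exp(log_sigma))
    u = np.clip(u, 1e-12, 1 - 1e-12)                  # guard the logarithms
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

rng = np.random.default_rng(2)
x = genpareto.rvs(c=0.2, scale=1.0, size=500, random_state=rng)

fit = minimize(anderson_darling, x0=[0.1, 0.0], args=(x,), method="Nelder-Mead")
xi_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(xi_hat, sigma_hat)
```

Replacing the Anderson-Darling statistic with a Kolmogorov or Cramér-von Mises statistic in the objective function gives the other members of this family of estimators.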
Journal of Quality Technology | 1997
George E. P. Box; Alberto Luceño
This paper explains the nature and importance of proportional-integral control and shows how it may be adapted to statistical process control. The relation of this type of control to exponential smoothing, minimum mean squared error control, and optimal..
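A minimal sketch (under illustrative assumptions, not the paper's recommended tuning) of discrete proportional-integral adjustment applied to a simulated IMA(1,1) disturbance: the compensating variable is reset each period in proportion to the current deviation from target and to its running sum.

```python
import numpy as np

# Sketch of discrete proportional-integral (PI) feedback adjustment (illustrative
# gains and disturbance model).  The disturbance is an IMA(1,1) series; at each
# period the compensating variable is set in proportion to the current deviation
# from target (P term) and to the running sum of deviations (I term), scaled by
# the process gain g.

rng = np.random.default_rng(3)
n, theta, g = 5_000, 0.6, 1.0           # horizon, IMA parameter, process gain
kP, kI = 0.2, 0.2                       # proportional and integral gains

a = rng.normal(0.0, 1.0, n + 1)
z = np.cumsum(a[1:] - theta * a[:-1])   # IMA(1,1) disturbance

x, cum_e, sq_dev = 0.0, 0.0, 0.0
for t in range(n):
    e = z[t] + g * x                    # deviation observed with the current setting
    sq_dev += e * e
    cum_e += e
    x = -(kP * e + kI * cum_e) / g      # PI rule for the compensating variable

print(sq_dev / n)                       # mean squared deviation under PI adjustment
```

With kP = 0 this reduces to pure integral control, which is closely related to exponential smoothing of the disturbance; the proportional term adds faster reaction to the current deviation.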
Technometrics | 2000
Alberto Luceño; Jaime Puig-Pey
This article provides a fast and accurate algorithm to compute the run-length probability distribution of cumulative sum (CUSUM) charts for controlling a process mean. The algorithm uses a fast and numerically stable recursive formula based on accurate Gaussian quadrature rules throughout the whole range of the computed run-length distribution and therefore improves the numerical efficiency and accuracy of existing methods. The algorithm can detect whether or not the geometric approximation is adequate and, when possible, allows switching to the geometric recursion. The procedure may be applied not only to the normal distribution but also to nonsymmetric and long-tailed continuous distributions, some examples of which are provided. Methods to assess chart performance according to the run-length distribution, as well as some multivariate issues in statistical process control, are also considered.
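The underlying idea can be sketched as follows (this is not the article's exact algorithm, which adds numerically stable recursions and the switch to the geometric tail): the survival probabilities P(RL > n) of a one-sided CUSUM satisfy a recursion driven by an integral over the chart's continuation region, and that integral can be evaluated with Gauss-Legendre quadrature. Parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Run-length distribution sketch for a one-sided CUSUM
#   C_t = max(0, C_{t-1} + X_t - k),  signal when C_t >= h,
# via the integral-equation recursion evaluated with Gauss-Legendre quadrature.

k, h, mean_shift, n_nodes, n_max = 0.5, 4.0, 0.0, 64, 2000

nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
y = 0.5 * h * (nodes + 1.0)                    # quadrature nodes mapped to (0, h)
w = 0.5 * h * weights

def f(x): return norm.pdf(x, loc=mean_shift)   # observation density (shifted if needed)
def F(x): return norm.cdf(x, loc=mean_shift)

P = np.ones(n_nodes)                           # P(RL > n | C_0 = y[j]), start with n = 0
P_zero = 1.0                                   # P(RL > n | C_0 = 0)
kernel = f(y[None, :] - y[:, None] + k) * w    # transition density between nodes
reset = F(k - y)                               # probability of resetting to zero
kernel0 = f(y + k) * w                         # transitions from the zero state
reset0 = F(k)

survival = []                                  # P(RL > n) starting from C_0 = 0
for n in range(n_max):
    P_new = reset * P_zero + kernel @ P
    P_zero = reset0 * P_zero + kernel0 @ P
    P = P_new
    survival.append(P_zero)

arl = 1.0 + sum(survival)                      # (truncated) ARL = sum over n of P(RL > n)
pmf = -np.diff([1.0] + survival)               # P(RL = n) for n = 1..n_max
print(arl, pmf[:5])
```

Swapping norm for a skewed or long-tailed distribution in f and F reuses the same recursion, which is the flexibility the abstract refers to.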
Computational Statistics & Data Analysis | 1999
Alberto Luceño
Cuscore charts have recently been proposed as a statistical process monitoring tool designed to cope with situations in which special kinds of signals are feared a priori because they are known to affect the particular system being monitored. This paper provides algorithms to compute average run lengths and the corresponding run-length probability distributions for cuscore charts used to control a process mean. Paralleling standard CUSUM techniques, a handicap may be used to define the cuscore statistic. Three schemes are described, depending on whether or not the signals and the handicap are reinitialized every time the cuscore statistic reaches its zero limit. The algorithms easily handle hypothetical situations in which the signal being searched for differs from the true signal. Similarly, although the handicap should usually be chosen proportional to the signal to be detected in order to improve chart performance, the algorithms can cope with more general handicaps.
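The article computes these run-length distributions with dedicated algorithms; the sketch below instead estimates run lengths by plain Monte Carlo, for a one-sided cuscore-type statistic whose exact form (increment (y_t - k*f_t)*f_t, with a handicap proportional to the anticipated signal f_t and a reset at zero) is an assumption made here for illustration rather than the paper's definition.

```python
import numpy as np

# Monte Carlo sketch (not the article's recursive algorithms) of run lengths for a
# one-sided cuscore-type chart.  Assumed statistic:
#     Q_t = max(0, Q_{t-1} + (y_t - k * f_t) * f_t),
# i.e. deviations weighted by the anticipated signal f_t, a handicap proportional
# to that signal, and a reset whenever Q falls to zero; the exact form and the
# reinitialization rules in the paper may differ.

def run_length(searched_signal, handicap, limit, rng, true_signal=None, max_n=10_000):
    """Run length of the chart for one simulated N(0,1) noise series."""
    q = 0.0
    for t in range(max_n):
        f_t = searched_signal(t)
        y_t = rng.normal() + (true_signal(t) if true_signal else 0.0)
        q = max(0.0, q + (y_t - handicap * f_t) * f_t)
        if q > limit:
            return t + 1
    return max_n

rng = np.random.default_rng(4)
sustained_shift = lambda t: 1.0          # anticipated signal: a step change in the mean
rls = [run_length(sustained_shift, handicap=0.25, limit=4.0, rng=rng)
       for _ in range(2000)]
print(np.mean(rls))                      # Monte Carlo estimate of the in-control ARL
```

Passing a true_signal different from searched_signal reproduces the mismatch scenario mentioned above, and replacing the proportional handicap by any other function of t covers the more general handicaps.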
Technometrics | 1998
Alberto Luceño
Discrete dead-band adjustment schemes are often analyzed assuming that the disturbance may be approximately represented by a nonstationary integrated moving average (IMA) model. Sometimes, however, stationary autoregressive moving average (ARMA) models have been used for the same purpose. This article shows (a) that the IMA model leads to a much easier analysis; (b) that almost exactly the same average adjustment intervals (AAIs) and mean squared deviations (MSDs) are obtained under both disturbance models in the region of interest of the action limits; (c) that for wider action limits the ARMA disturbance overestimates the AAI and underestimates the MSD with respect to the results provided by the IMA disturbance; and (d) that the IMA disturbance model is robust against model misspecification but the ARMA model is not.
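For intuition, the following Monte Carlo sketch (illustrative parameter values, not from the article) estimates the AAI and MSD of a dead-band scheme under an IMA(1,1) disturbance: an EWMA forecast of the deviation, with lambda = 1 - theta, is compared with action limits of +/-L, and the process is adjusted to cancel the forecast only when a limit is crossed.

```python
import numpy as np

# Dead-band (bounded) adjustment under an IMA(1,1) disturbance, unit process gain.
# The EWMA of observed deviations (lambda = 1 - theta, the minimum-MSE forecast for
# an IMA disturbance) is compared with action limits +/-L; only when the forecast
# crosses a limit is the process adjusted to cancel it.

rng = np.random.default_rng(5)
n, theta, L = 100_000, 0.6, 1.0
lam = 1.0 - theta

a = rng.normal(0.0, 1.0, n + 1)
z = np.cumsum(a[1:] - theta * a[:-1])      # IMA(1,1) disturbance

adj_total, forecast, n_adjustments, sq_dev = 0.0, 0.0, 0, 0.0
for t in range(n):
    e = z[t] + adj_total                   # observed deviation from target
    sq_dev += e * e
    forecast = lam * e + (1.0 - lam) * forecast
    if abs(forecast) > L:                  # dead band exceeded: adjust
        adj_total -= forecast              # cancel the forecast deviation
        forecast = 0.0
        n_adjustments += 1

print("AAI:", n / n_adjustments)           # average adjustment interval
print("MSD:", sq_dev / n)                  # mean squared deviation
```

Re-running the same loop with a stationary ARMA disturbance in place of z is one way to reproduce, by simulation, the AAI and MSD comparisons summarized in points (b) and (c) above.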
Communications in Statistics - Simulation and Computation | 1996
Alberto Luceño
A well-known process capability index is slightly modified in this article to provide a new measure of process capability that takes account of the process location and variability, and for which a point estimator and confidence intervals exist that are insensitive to departures from the assumption of normal variability. Two examples of applications based on real data are presented.
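The abstract does not state the modified index explicitly; the sketch below shows one capability index of this general type, sometimes denoted C_pc, in which the 6*sigma denominator of C_p is replaced by 6*sqrt(pi/2) times the mean absolute deviation from the midpoint of the specification limits, so that both location and variability are penalized without relying on normality. Consult the article for the exact definition, its point estimator and the associated confidence intervals.

```python
import numpy as np

# Sketch of a capability index of the C_pc type: the usual 6*sigma denominator of
# C_p is replaced by 6*sqrt(pi/2)*mean(|x - T|), with T the midpoint of the
# specification limits, so the index reflects both location and variability and
# does not rely on a normality assumption.  Illustrative form only; see the
# article for the exact definition and its confidence intervals.

def cpc(x, lsl, usl):
    t = 0.5 * (lsl + usl)                         # target taken as the spec midpoint
    mad = np.mean(np.abs(np.asarray(x) - t))      # mean absolute deviation from T
    return (usl - lsl) / (6.0 * np.sqrt(np.pi / 2.0) * mad)

rng = np.random.default_rng(6)
x = rng.normal(10.2, 0.5, 200)                    # slightly off-centre process
print(cpc(x, lsl=8.5, usl=11.5))
```

For a normal process centred at the spec midpoint this index coincides with C_p; off-centre or heavier-tailed data increase the mean absolute deviation and lower the index accordingly.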
Journal of Quality Technology | 1995
Alberto Luceño
The exponentially weighted moving average (EWMA) of past data is frequently used in process control applications. In engineering process control, the mean level of the quality characteristic is assumed to wander over time. If the integrated moving avera..
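For reference, a minimal sketch of the EWMA recursion itself (data and parameter values are illustrative): z_t = lambda*y_t + (1 - lambda)*z_{t-1}, which is the minimum mean-squared-error one-step forecast of the level when the mean wanders according to an IMA(1,1) model with theta = 1 - lambda.

```python
import numpy as np

# Exponentially weighted moving average (EWMA) of past data:
#   z_t = lam * y_t + (1 - lam) * z_{t-1}.
# When the mean level wanders as an IMA(1,1) process with theta = 1 - lam, this
# EWMA is the minimum mean-squared-error one-step-ahead forecast of the level.

def ewma(y, lam, z0=None):
    z = np.empty(len(y))
    prev = y[0] if z0 is None else z0
    for t, y_t in enumerate(y):
        prev = lam * y_t + (1.0 - lam) * prev
        z[t] = prev
    return z

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, 500)
y = np.cumsum(a - 0.7 * np.concatenate(([0.0], a[:-1])))   # IMA(1,1), theta = 0.7
print(ewma(y, lam=0.3)[-5:])                                # forecasts track the wandering mean
```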
Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 2006
Alberto Luceño; Melisa Menéndez; Fernando J. Méndez
The term ‘extreme ocean climate estimation’ refers to the assessment of the statistical distribution of extreme oceanographic geophysical variables. Components of the ocean climate include variables such as storm surge, wind velocity and significant wave height. Important characteristics of the extreme ocean climate are the frequencies of exceedances of ocean climate variables over selected thresholds. Assuming that exceedances are statistically independent of each other, their frequencies can be estimated using non-homogeneous Poisson processes. However, exceedances often exhibit temporal dependency because of the tendency of storms to gather in clusters. We assess the effect of these dependencies on the estimation of the rate of occurrence of extreme events. Using a database built under the HIPOCAS European project, which covers the Western Mediterranean Sea, we compare the performance of the non-homogeneous Poisson process approach versus a new model that allows for temporal dependency. We show that the latter outperforms the former in terms of the resulting goodness of fit and the significance of the parameters involved.
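A minimal sketch of the baseline approach discussed above, i.e. fitting a non-homogeneous Poisson process with a seasonal (annual harmonic) rate to the times of threshold exceedances by maximizing log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt. The toy data and variable names are illustrative, and the paper's extension that accounts for the clustering of storms is not shown.

```python
import numpy as np
from scipy.optimize import minimize

# Non-homogeneous Poisson process with a seasonal rate,
#   lambda(t) = exp(b0 + b1*cos(2*pi*t) + b2*sin(2*pi*t)),  t in years,
# fitted to exceedance times by maximizing
#   logL(b) = sum_i log lambda(t_i) - integral_0^T lambda(t) dt.

def log_rate(t, b):
    return b[0] + b[1] * np.cos(2 * np.pi * t) + b[2] * np.sin(2 * np.pi * t)

def neg_log_lik(b, times, horizon, m=4000):
    dt = horizon / m
    grid = (np.arange(m) + 0.5) * dt                      # midpoint rule for the integral
    compensator = np.sum(np.exp(log_rate(grid, b))) * dt
    return compensator - np.sum(log_rate(times, b))

# Toy exceedance times over 20 years, more frequent near t mod 1 = 0 ("winter")
rng = np.random.default_rng(8)
times = np.sort(rng.uniform(0.0, 20.0, 300))
times = times[rng.uniform(size=times.size) < 0.5 * (1.0 + np.cos(2 * np.pi * times))]

fit = minimize(neg_log_lik, x0=np.zeros(3), args=(times, 20.0), method="Nelder-Mead")
print(fit.x)   # b1 should come out clearly positive for this seasonal toy data
```

A clustered-arrivals model of the kind compared in the paper would modify this likelihood so that the occurrence rate also depends on the recent history of exceedances, rather than on time alone.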