Adolfo Hernandez
University of Exeter
Publications
Featured research published by Adolfo Hernandez.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2007
Vitaly Schetinin; Jonathan E. Fieldsend; Derek Partridge; Tim Coats; Wojtek J. Krzanowski; Richard M. Everson; Trevor C. Bailey; Adolfo Hernandez
Bayesian averaging (BA) over ensembles of decision models allows the uncertainty of decisions to be evaluated, which is of crucial importance for safety-critical applications such as medical diagnostics. The interpretability of the ensemble can also give useful information to experts responsible for making reliable decisions. For this reason, decision trees (DTs) are attractive decision models for experts. However, BA over such models makes an ensemble of DTs uninterpretable. In this paper, we present a new approach to the probabilistic interpretation of Bayesian DT ensembles. This approach is based on a quantitative evaluation of the uncertainty of the DTs, and allows experts to find a DT that provides high predictive accuracy and confident outcomes. To make the BA over DTs feasible in our experiments, we use a Markov Chain Monte Carlo technique with a reversible jump extension. The results obtained on clinical data show that, in terms of predictive accuracy, the proposed method outperforms the maximum a posteriori (MAP) method that has been suggested for the interpretation of DT ensembles.
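A minimal sketch of the selection idea in this abstract: score each tree in an ensemble by validation accuracy and by how confident its class probabilities are, then pick the most accurate, least uncertain tree. The paper samples trees by reversible-jump MCMC; here a bagged scikit-learn ensemble stands in for the posterior sample, and the scoring rule is an illustrative assumption.

```python
# Illustrative sketch only: the paper samples decision trees with reversible-jump
# MCMC; here a bagged ensemble of sklearn trees stands in for the posterior sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=4),
                             n_estimators=50, random_state=0).fit(X_tr, y_tr)

def mean_entropy(tree, X):
    """Average predictive entropy: lower means more confident outcomes."""
    p = np.clip(tree.predict_proba(X), 1e-12, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=1)))

# Score each tree by validation accuracy and confidence, then pick the tree
# with the highest accuracy, breaking ties in favour of lower uncertainty.
scores = [(t.score(X_val, y_val), -mean_entropy(t, X_val), i)
          for i, t in enumerate(ensemble.estimators_)]
best_acc, neg_ent, best_idx = max(scores)
print(f"selected tree {best_idx}: accuracy={best_acc:.3f}, entropy={-neg_ent:.3f}")
```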
Structural Equation Modeling | 2011
Alberto Maydeu-Olivares; Li Cai; Adolfo Hernandez
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be compared. When applied to a binary data set, our experience suggests that IRT and FA models yield similar fits. However, when the data are polytomous ordinal, IRT models yield a better fit because they involve a higher number of parameters. But when fit is assessed using the root mean square error of approximation (RMSEA), similar fits are obtained again. We explain why. These test statistics have little power to distinguish between FA and IRT models; they are unable to detect that linear FA is misspecified when applied to ordinal data generated under an IRT model.
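A minimal worked example of the RMSEA comparison discussed above, using the standard RMSEA formula (not code from the paper); the chi-square values, degrees of freedom and sample size are hypothetical.

```python
# Minimal sketch of the RMSEA computation: the statistic penalises model
# complexity through the degrees of freedom, which is why FA and IRT fits
# can look similar on it even when their chi-square values differ.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation for a chi-square test statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical numbers: the IRT-like model has a smaller chi-square but also
# fewer degrees of freedom, so the two RMSEA values end up close.
print(rmsea(chi2=210.0, df=170, n=1000))   # FA-like model
print(rmsea(chi2=150.0, df=120, n=1000))   # IRT-like model
```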
Advances in Data Analysis and Classification | 2007
Nor Idayu Mahat; Wojtek J. Krzanowski; Adolfo Hernandez
Non-parametric smoothing of the location model is a potential basis for discriminating between groups of objects using mixtures of continuous and categorical variables simultaneously. However, it may lead to unreliable estimates of parameters when too many variables are involved. This paper proposes a method for performing variable selection on the basis of distance between groups as measured by smoothed Kullback–Leibler divergence. Searching strategies using forward, backward and stepwise selections are outlined, and corresponding stopping rules derived from asymptotic distributional results are proposed. Results from a Monte Carlo study demonstrate the feasibility of the method. Examples on real data show that the method is generally competitive with, and sometimes is better than, other existing classification methods.
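A sketch of the forward-selection loop described above. The paper's criterion is a smoothed Kullback–Leibler divergence for mixed continuous and categorical variables, with stopping rules derived from asymptotic distributions; as an assumption for illustration, this sketch uses the KL divergence between Gaussians fitted to candidate continuous variables and stops after a fixed number of variables.

```python
# Sketch of forward variable selection driven by a between-group divergence.
# The Gaussian KL used here is a stand-in for the paper's smoothed
# Kullback-Leibler divergence for mixed variables (an assumption).
import numpy as np

def gaussian_kl(X1, X2):
    """KL divergence between Gaussians fitted to the selected columns of two groups."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(X1.shape[1])
    S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(X2.shape[1])
    iS2 = np.linalg.inv(S2)
    d = X1.shape[1]
    return 0.5 * (np.trace(iS2 @ S1) + (m2 - m1) @ iS2 @ (m2 - m1)
                  - d + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

def forward_select(Xa, Xb, n_vars):
    """Greedily add the variable that most increases the between-group divergence."""
    selected, remaining = [], list(range(Xa.shape[1]))
    while len(selected) < n_vars:
        gains = [(gaussian_kl(Xa[:, selected + [j]], Xb[:, selected + [j]]), j)
                 for j in remaining]
        best_gain, best_j = max(gains)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
Xa = rng.normal(0, 1, size=(100, 5))
Xb = rng.normal(0, 1, size=(100, 5)); Xb[:, 2] += 2.0
print(forward_select(Xa, Xb, n_vars=2))   # variable 2 should be picked first
```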
Multivariate Behavioral Research | 2006
Albert Maydeu-Olivares; Adolfo Hernandez; Roderick P. McDonald
We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model yields closed form expressions for the cell probabilities. We estimate and test the goodness of fit of the model using only information contained in the univariate and bivariate moments of the data. Also, we pit the new model against the multidimensional normal ogive model estimated using NOHARM in four applications involving (a) attitudes toward censorship, (b) satisfaction with life, (c) attitudes of morality and equality, and (d) political efficacy. The normal PDF model is not invariant to simple operations such as reverse scoring. Thus, when there is no natural category to be modeled, as in many personality applications, it should be fit separately with and without reverse scoring for comparisons.
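An illustrative item response function for the proximity (ideal point) mechanism described above: endorsement probability is a normal-PDF-shaped function of the respondent's distance to an ideal hyperplane and equals one exactly on that hyperplane. The parameterisation below is an assumption for illustration, not the paper's exact specification.

```python
# Illustrative proximity IRF: probability of endorsement peaks at 1 on the
# ideal hyperplane a'theta + b = 0 and decays like a normal PDF away from it.
# This parameterisation is assumed for illustration only.
import numpy as np

def proximity_irf(theta, a, b):
    """P(endorse) = exp(-0.5 * (a'theta + b)^2); equals 1 when a'theta + b = 0."""
    z = theta @ a + b
    return np.exp(-0.5 * z ** 2)

theta = np.array([[0.0, 0.0], [1.0, -1.0], [2.0, 2.0]])   # latent trait vectors
a, b = np.array([0.8, -0.5]), 0.0                          # hypothetical item parameters
print(proximity_irf(theta, a, b))
```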
Journal of Computational and Graphical Statistics | 2005
Adolfo Hernandez; Santiago Velilla
This article develops a dimension-reduction method in kernel discriminant analysis, based on a general concept of separation of populations. The ideas we present lead to a characterization of the central subspace that does not impose restrictions on the marginal distribution of the feature vector. We also give a new procedure for estimating relevant directions in the central subspace. Comparisons to other procedures are studied and examples of application are discussed.
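A sketch of a kernel discriminant rule applied after dimension reduction, in the spirit of the article. The article develops its own estimator of directions in the central subspace; as a stand-in (an assumption), this sketch projects onto Fisher discriminant directions and classifies with class-wise kernel density estimates.

```python
# Kernel discriminant analysis on a reduced subspace. The projection onto LDA
# directions is a placeholder for the article's central-subspace estimator.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KernelDensity

def kernel_discriminant_predict(X_train, y_train, X_test, n_dirs=1, bandwidth=0.5):
    proj = LinearDiscriminantAnalysis(n_components=n_dirs).fit(X_train, y_train)
    Z_tr, Z_te = proj.transform(X_train), proj.transform(X_test)
    classes = np.unique(y_train)
    # log p(z | class) + log prior, evaluated per class on the projected data
    log_scores = np.column_stack([
        KernelDensity(bandwidth=bandwidth).fit(Z_tr[y_train == c]).score_samples(Z_te)
        + np.log(np.mean(y_train == c))
        for c in classes])
    return classes[np.argmax(log_scores, axis=1)]

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
preds = kernel_discriminant_predict(X[:200], y[:200], X[200:])
print((preds == y[200:]).mean())
```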
Electronic Journal of Statistics | 2014
Pedro Delicado; Adolfo Hernandez; Gábor Lugosi
Given n independent, identically distributed random vectors in R^d, drawn from a common density f, one wishes to find out whether the support of f is convex or not. In this paper we describe a decision rule which decides correctly for sufficiently large n, with probability 1, whenever f is bounded away from zero on its compact support. We also show that the assumption of boundedness is necessary. The rule is based on a statistic that is a second-order U-statistic with a random kernel. Moreover, we suggest a way of approximating the distribution of the statistic under the hypothesis of convexity of the support. The performance of the proposed method is illustrated on simulated data sets. As an example of its potential statistical implications, the decision rule is used to automatically choose the tuning parameter of ISOMAP, a nonlinear dimensionality reduction method.
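A sketch of the intuition behind the decision rule: if the support is convex, the midpoint of any two sample points also lies in the support and should therefore be close to some observation, so counting isolated midpoints over pairs gives a second-order U-statistic-style quantity. The radius, data and threshold below are illustrative assumptions, not the paper's statistic or calibration.

```python
# Convexity check intuition: count pairwise midpoints that have no sample
# point nearby. Many isolated midpoints suggest a non-convex support.
import numpy as np
from scipy.spatial import cKDTree

def convexity_score(X, radius=0.15):
    """Fraction of pairwise midpoints with no sample point within `radius`."""
    tree = cKDTree(X)
    i, j = np.triu_indices(len(X), k=1)
    midpoints = 0.5 * (X[i] + X[j])
    dist, _ = tree.query(midpoints, k=1)
    return float(np.mean(dist > radius))

rng = np.random.default_rng(0)
angle = rng.uniform(0, 2 * np.pi, 400)
r = np.sqrt(rng.uniform(0, 1, 400))
disc = np.column_stack([r * np.cos(angle), r * np.sin(angle)])   # unit disc (convex)
ring = disc[r > 0.6]                                             # annulus (non-convex)
print(convexity_score(disc), convexity_score(ring))  # the second is typically larger
```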
SSS | 2006
Derek Partridge; Trevor C. Bailey; Richard M. Everson; Jonathan E. Fieldsend; Adolfo Hernandez; Wojtek J. Krzanowski; Vitaly Schetinin
In this paper we demonstrate an application of data-driven software development in a Bayesian framework such that every computed result arises from within a context and so can be associated with a ‘confidence’ estimate whose validity is underpinned by Bayesian principles. This technique, which induces software modules from data samples (e.g., training a neural network), can be contrasted with more traditional, abstract specification-driven software development, which has tended to compute a result and then add secondary computation to produce an associated ‘confidence’ measure.
Intelligent Data Engineering and Automated Learning | 2004
Vitaly Schetinin; Derek Partridge; Wojtek J. Krzanowski; Richard M. Everson; Jonathan E. Fieldsend; Trevor C. Bailey; Adolfo Hernandez
In this paper we experimentally compare the classification uncertainty of the randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique with a restarting strategy on a synthetic dataset as well as on some datasets commonly used in the machine learning community. For quantitative evaluation of classification uncertainty, we use an Uncertainty Envelope dealing with the class posterior distribution and a given confidence probability. Counting the classifier outcomes, this technique produces feasible evaluations of the classification uncertainty. Using this technique in our experiments, we found that the Bayesian DT technique is superior to the randomised DT ensemble technique.
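A sketch of an Uncertainty Envelope-style tally as described in the abstract: each test point is labelled confident-correct, confident-incorrect or uncertain by comparing the fraction of ensemble members voting for the majority class with a given confidence probability. The categories and threshold follow the abstract's description; the exact definitions in the paper may differ.

```python
# Tally of ensemble outcomes into confident-correct, confident-incorrect and
# uncertain categories, given a confidence probability gamma.
import numpy as np

def uncertainty_envelope(votes, y_true, gamma=0.9):
    """votes: (n_classifiers, n_points) array of predicted class labels."""
    counts = [np.bincount(votes[:, i]) for i in range(votes.shape[1])]
    majority = np.array([c.argmax() for c in counts])
    agreement = np.array([c.max() / votes.shape[0] for c in counts])
    confident = agreement >= gamma
    correct = majority == y_true
    return {"confident_correct": float(np.mean(confident & correct)),
            "confident_incorrect": float(np.mean(confident & ~correct)),
            "uncertain": float(np.mean(~confident))}

votes = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 1, 1]])   # 3 classifiers, 4 points
print(uncertainty_envelope(votes, y_true=np.array([0, 1, 1, 0]), gamma=0.75))
```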
arXiv: Artificial Intelligence | 2007
Vitaly Schetinin; Jonathan E. Fieldsend; Derek Partridge; Wojtek J. Krzanowski; Richard M. Everson; Trevor C. Bailey; Adolfo Hernandez
Bayesian averaging over classification models allows the uncertainty of classification outcomes to be evaluated, which is of crucial importance for making reliable decisions in applications such as finance, in which risks have to be estimated. The uncertainty of classification is determined by a trade-off between the amount of data available for training, the diversity of the classifier ensemble and the required performance. The interpretability of classification models can also give useful information to experts responsible for making reliable classifications. For this reason Decision Trees (DTs) seem to be attractive classification models. The required diversity of the DT ensemble can be achieved by Bayesian model averaging over all possible DTs. In practice, the Bayesian approach can be implemented using a Markov Chain Monte Carlo (MCMC) technique that samples randomly from the posterior distribution. For sampling large DTs, the MCMC method is extended with a Reversible Jump technique which allows DTs to be induced under given priors. For the case when prior information on the DT size is unavailable, a sweeping technique that defines the prior implicitly shows better performance. Within this chapter we explore the classification uncertainty of the Bayesian MCMC techniques on some datasets from the StatLog Repository and on real financial data. The classification uncertainty is compared using an Uncertainty Envelope technique dealing with the class posterior distribution and a given confidence probability. This technique provides realistic estimates of the classification uncertainty which can be easily interpreted in statistical terms with the aim of risk evaluation.
SSS | 2006
Richard M. Everson; Jonathan E. Fieldsend; Trevor C. Bailey; Wojtek J. Krzanowski; Derek Partridge; Vitaly Schetinin; Adolfo Hernandez
The operation of many safety related systems is dependent upon a number of interacting parameters. Frequently these parameters must be ‘tuned’ to the particular operating environment to provide the best possible performance. We focus on the Short Term Conflict Alert (STCA) system, which warns of airspace infractions between aircraft, as an example of a safety related system that must raise an alert to dangerous situations, but should not raise false alarms. Current practice is to ‘tune’ by hand the many parameters governing the system in order to optimise the operating point in terms of the true positive and false positive rates, which are frequently associated with highly imbalanced costs.
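A sketch of replacing the hand-tuning described above with a simple search: a grid over two hypothetical alert thresholds is scored by true and false positive rates, and the non-dominated (Pareto-optimal) operating points are retained. The parameter names, synthetic data and scoring function are illustrative assumptions; the real STCA system involves many more interacting parameters.

```python
# Grid search over two hypothetical alert thresholds, keeping the operating
# points that are Pareto-optimal in (true positive rate, false positive rate).
import itertools
import numpy as np

rng = np.random.default_rng(0)
separation = rng.uniform(0, 10, 2000)            # hypothetical aircraft separation (nm)
closing_speed = rng.uniform(0, 600, 2000)        # hypothetical closing speed (kt)
dangerous = (separation < 3) & (closing_speed > 300)   # synthetic ground truth

def rates(sep_thresh, speed_thresh):
    alert = (separation < sep_thresh) & (closing_speed > speed_thresh)
    tpr = (alert & dangerous).sum() / max(dangerous.sum(), 1)
    fpr = (alert & ~dangerous).sum() / max((~dangerous).sum(), 1)
    return tpr, fpr

candidates = [(s, v, *rates(s, v))
              for s, v in itertools.product(np.linspace(1, 6, 11), np.linspace(100, 500, 9))]
# Keep operating points not dominated by any other (higher TPR and lower FPR).
pareto = [c for c in candidates
          if not any(o[2] >= c[2] and o[3] <= c[3] and (o[2], o[3]) != (c[2], c[3])
                     for o in candidates)]
print(len(pareto), "non-dominated operating points")
```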