Carmen Fernández
University of St Andrews
Publication
Featured research published by Carmen Fernández.
Journal of Econometrics | 1997
Carmen Fernández; Jacek Osiewalski; Mark F. J. Steel
We consider a Bayesian analysis of the stochastic frontier model with composed error. Under a commonly used class of (partly) noninformative prior distributions, the existence of the posterior distribution and of posterior moments is examined. Viewing this model as a Normal linear regression model with regression parameters corresponding to both the frontier and the inefficiency terms generates the insights used to derive results in a very wide framework. It is found that in pure cross-section models posterior inference is precluded under this ‘usual’ class of priors. Existence of a well-defined posterior distribution then crucially hinges upon the structure imposed on the inefficiency terms. Exploiting panel data naturally suggests the use of more structured models, where Bayesian inference can be conducted.
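The composed-error structure at the heart of this model (a symmetric noise term plus a one-sided inefficiency term) can be illustrated with a short simulation. The Normal-Exponential specification and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Frontier model: y = beta0 + beta1 * x + v - u  (illustrative parameters)
x = rng.uniform(0.0, 1.0, n)
beta0, beta1 = 1.0, 0.5
v = rng.normal(0.0, 0.2, n)      # symmetric measurement noise
u = rng.exponential(0.3, n)      # one-sided inefficiency term, u >= 0
y = beta0 + beta1 * x + v - u

# The composed error v - u is left-skewed: inefficiency pulls observed
# output below the frontier, so its mean is -E[u] rather than zero.
eps = v - u
print(f"mean composed error: {eps.mean():.3f}")
```

The one-sided term is what distinguishes a frontier from an ordinary regression: deviations below the frontier mix noise with genuine inefficiency, and the paper's question is when a posterior over that decomposition even exists.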
Journal of the Royal Statistical Society Series B (Statistical Methodology) | 2002
Carmen Fernández; Peter Green
The paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson-distributed data, and applications in disease mapping. We work in a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially dependent weights, based on transformations of autoregressive Gaussian processes: in one (the logistic normal model), the mixture component labels are exchangeable; in the other (the grouped continuous model), they are ordered. Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performances of both of these formulations are examined on synthetic data and real data on mortality from a rare disease.
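The logistic normal construction maps a Gaussian field into spatially dependent mixture weights. The sketch below shows the idea on a toy one-dimensional chain of locations; the AR(1) dependence and all parameter values are illustrative assumptions, not the paper's actual spatial model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_locations, n_components = 10, 3
rho = 0.8  # illustrative autoregressive dependence between neighbours

# One AR(1) Gaussian process per component (minus a reference component)
z = np.zeros((n_locations, n_components - 1))
z[0] = rng.normal(size=n_components - 1)
for i in range(1, n_locations):
    z[i] = rho * z[i - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_components - 1)

# Logistic (softmax) transformation: fix a zero reference column,
# exponentiate, and normalise so each location has valid weights.
logits = np.column_stack([z, np.zeros(n_locations)])
w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

print(w.sum(axis=1))  # each row sums to 1
```

Because the underlying Gaussian processes are smooth across neighbouring locations, the resulting mixture weights vary smoothly too, which is the mechanism the paper exploits for disease mapping.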
Journal of the American Statistical Association | 2002
Carmen Fernández; Gary Koop; Mark F. J. Steel
Many production processes yield both good outputs and undesirable ones (e.g., pollutants). In this article we develop a generalization of a stochastic frontier model that is appropriate for such technologies. We discuss efficiency analysis and, in particular, define technical and environmental efficiency in the context of our model. We develop methods for carrying out Bayesian inference and apply them to a panel data set of Dutch dairy farms, where excess nitrogen production constitutes an important environmental problem.
Journal of Econometrics | 2000
Carmen Fernández; Gary Koop; Mark F. J. Steel
In this paper we develop Bayesian tools for estimating multi-output production frontiers in applications where only input and output data are available. Firm-specific inefficiency is measured relative to this frontier. Our work has important differences from the existing literature, which either assumes a classical econometric perspective with restrictive functional form assumptions, or a non-stochastic approach which directly estimates the output distance function. Bayesian inference is implemented using a Markov Chain Monte Carlo algorithm. A banking application shows the ease and practicality of our approach.
Econometric Theory | 2000
Carmen Fernández; Mark F. J. Steel
This paper considers a Bayesian analysis of the linear regression model under independent sampling from general scale mixtures of Normals. Using a common reference prior, we investigate the validity of Bayesian inference and the existence of posterior moments of the regression and scale parameters. We find that whereas existence of the posterior distribution does not depend on the choice of the design matrix or the mixing distribution, both of them can crucially intervene in the existence of posterior moments. We identify some useful characteristics that allow for an easy verification of the existence of a wide range of moments. In addition, we provide full characterizations under sampling from finite mixtures of Normals, Pearson VII or certain Modulated Normal distributions. For empirical applications, a numerical implementation based on the Gibbs sampler is recommended.
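A standard example of the sampling framework above is the Student-t distribution, which arises as a scale mixture of Normals: drawing a Gamma-distributed precision factor and then sampling Normally, conditional on it, yields a t marginal. The simulation below verifies this via the marginal variance; the degrees-of-freedom value is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, mu, sigma = 5.0, 0.0, 1.0
n = 100_000

# Scale-mixture representation: lambda ~ Gamma(nu/2, rate nu/2),
# y | lambda ~ Normal(mu, sigma^2 / lambda)  =>  y ~ Student-t, nu dof
lam = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
y = rng.normal(mu, sigma / np.sqrt(lam))

# The marginal variance of a t_nu distribution is nu / (nu - 2)
print(f"sample variance: {y.var():.3f}  theoretical: {nu / (nu - 2):.3f}")
```

The mixing distribution on the precision is exactly what can break posterior moments in the paper's analysis: heavier-tailed mixing gives heavier-tailed sampling distributions, even though the posterior itself exists regardless.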
Journal of the Royal Statistical Society Series C (Applied Statistics) | 2002
Carmen Fernández; Eduardo Ley; Mark F. J. Steel
We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used, the mesh size of the nets, etc.), are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log of catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to prediction of catch for single and aggregated ships.
Journal of Business & Economic Statistics | 2004
Carmen Fernández; Carmelo J. León; Mark F. J. Steel; F. J. Vázquez-Polo
The general aim of a contingent valuation survey is to elicit the willingness to pay (WTP) of respondents for some (public) commodity without a clear market price. This could be a program to protect some environmental resource or, as in our application, the access to a recreational area of particular interest. In this context, we want to accommodate the possibility of zero WTP and we need to deal with the fact that observations arise as intervals for WTP, rather than point observations. We propose a flexible Bayesian statistical analysis of WTP as a function of characteristics of the respondents that formally incorporates this structure through a mixture model. We consider model uncertainty and pay particular attention to the predictive distribution of revenue if a certain entry price were asked. The latter is an important tool for deriving pricing policies.
Journal of the American Statistical Association | 1997
Carmen Fernández; Jacek Osiewalski; Mark F. J. Steel
Some classical inference procedures can be shown to be completely robust in these classes of multivariate distributions. These findings are used in the practically relevant context of regression models. We present a robust Bayesian analysis and indicate the links between classical and Bayesian results. In particular, for the regression model with i.i.d. errors up to a scale, a formal characterization is provided for both classical and Bayesian robustness results concerning inference on the regression parameters.
Test | 1998
Carmen Fernández; Mark F. J. Steel
The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems is analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly considering possibly different sizes for each sample. Since the reference prior turns out to be improper in all cases, we examine existence of the resulting posterior distribution and its moments under sampling from scale mixtures of Normals. In the context of an empirical example, it is shown that a reference posterior analysis is numerically feasible and can display some sensitivity to the actual sampling distributions. This illustrates the practical importance of questioning the Normality assumption.
Archive | 1999
C. Glasbey; Donald A. Preece; P.J. Diggle; S.C. Pearce; Hans R. Künsch; Steven G. Gilmour; Carmen Fernández; Peter Green; Neil A. Butler; R. A. Bailey; R.L. Smith; Bernard W. Silverman; Christopher Jennison; G.A. Barnard; Bartlett; Nicky Best; K. Ickstadt; R.L. Wolpert; S. Byers; A.C. Davison; R.N. Edmondson; W.T. Federer; A. Gilmour; Brian R. Cullis; A. Smith; Arūnas P. Verbyla; M. Gumpertz; D.A. Harville; D.L. Zimmerman; S.N. MacEachern
The paper describes Bayesian analysis for agricultural field experiments, a topic that has received very little previous attention, despite a vast frequentist literature. Adoption of the Bayesian paradigm simplifies the interpretation of the results, especially in ranking and selection. Also, complex formulations can be analysed with comparative ease, by using Markov chain Monte Carlo methods. A key ingredient in the approach is the need for spatial representations of the unobserved fertility patterns. This is discussed in detail. Problems caused by outliers and by jumps in fertility are tackled via hierarchical t formulations that may find use in other contexts. The paper includes three analyses of variety trials for yield and one example involving binary data; none is entirely straightforward. Some comparisons with frequentist analyses are made.