Publication


Featured research published by Gabriel Huerta.


Journal of Climate | 2008

Error Reduction and Convergence in Climate Prediction

Charles S. Jackson; Mrinal K. Sen; Gabriel Huerta; Yi Deng; Kenneth P. Bowman

Although climate models have steadily improved their ability to reproduce the observed climate, over the years there has been little change to the wide range of sensitivities exhibited by different models to a doubling of atmospheric CO2 concentrations. Stochastic optimization is used to mimic how six independent climate model development efforts might use the same atmospheric general circulation model, set of observational constraints, and model skill criteria to choose different settings for parameters thought to be important sources of uncertainty related to clouds and convection. Each optimized model improved its skill with respect to observations selected as targets of model development. Of particular note were the improvements seen in reproducing observed extreme rainfall rates over the tropical Pacific, which was not specifically targeted during the optimization process. As compared to the default model sensitivity of 2.4°C, the ensemble of optimized model configurations had a larger and n...
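
A minimal sketch of the stochastic-optimization idea described above, using a cheap toy surrogate in place of the AGCM; the parameter dimension, cost function, and cooling schedule are illustrative assumptions, not the paper's configuration:

import numpy as np

rng = np.random.default_rng(0)

def skill_cost(params, obs):
    # Toy skill criterion: mismatch between a cheap surrogate response and "observations".
    response = np.array([np.sin(params[0]) + params[1] ** 2,
                         params[0] * params[1]])
    return np.sum((response - obs) ** 2)

def stochastic_optimize(obs, n_iter=5000, step=0.1, t0=1.0):
    # Simulated-annealing-style search over the two toy parameters.
    x = rng.normal(size=2)
    best_x, best_c = x, skill_cost(x, obs)
    c = best_c
    for k in range(n_iter):
        temp = t0 / (1.0 + k)                      # cooling schedule
        prop = x + rng.normal(scale=step, size=2)  # random perturbation
        c_prop = skill_cost(prop, obs)
        if c_prop < c or rng.random() < np.exp(-(c_prop - c) / temp):
            x, c = prop, c_prop
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c

obs = np.array([0.8, 0.3])   # stand-in observational targets
print(stochastic_optimize(obs))

Running several such optimizations from different random seeds mimics the setup of independent development efforts arriving at different, comparably skillful parameter settings.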


Environmental and Ecological Statistics | 2007

Time-varying models for extreme values

Gabriel Huerta; Bruno Sansó

We propose a new approach for modeling extreme values that are measured in time and space. First, we assume that the observations follow a Generalized Extreme Value (GEV) distribution for which the location, scale or shape parameters define the space–time structure. The temporal component is defined through a Dynamic Linear Model (DLM), or state-space representation, that allows estimation of the trend or seasonality of the data in time. The spatial element is imposed through the evolution matrix of the DLM, where we adopt a process convolution form. We show how to produce temporal and spatial estimates from our model via customized Markov chain Monte Carlo (MCMC) simulation. We illustrate our methodology with extreme values of ozone levels recorded daily in the metropolitan area of Mexico City and with rainfall extremes measured on the Caribbean coast of Venezuela.
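
A minimal sketch of a time-varying GEV fit in this spirit, with the DLM deliberately collapsed to a linear trend in the location parameter and a plain random-walk Metropolis sampler; the synthetic data, priors, and tuning constants are illustrative assumptions, not the authors' implementation:

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Synthetic monthly maxima with a drifting location parameter.
T = 120
t = np.arange(T)
y = genextreme.rvs(c=-0.1, loc=30 + 0.05 * t, scale=5, size=T, random_state=rng)

def log_post(theta):
    a, b, log_sigma, xi = theta
    # scipy's shape c equals -xi in the usual GEV parameterization.
    ll = genextreme.logpdf(y, c=-xi, loc=a + b * t, scale=np.exp(log_sigma)).sum()
    lp = -0.5 * (a**2 / 1e4 + b**2 / 1e2 + log_sigma**2 / 1e2 + xi**2)  # vague priors
    return ll + lp

theta = np.array([np.mean(y), 0.0, np.log(np.std(y)), 0.0])
lp_cur, samples = log_post(theta), []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.01, 0.05, 0.05])  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        theta, lp_cur = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)[5000:]          # drop burn-in
print(samples.mean(axis=0))                 # posterior means of (a, b, log-scale, xi)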


Bayesian Analysis | 2008

Computational methods for parameter estimation in climate models

Alejandro Villagran; Gabriel Huerta; Charles S. Jackson; Mrinal K. Sen

Intensive computational methods have been used by Earth scientists in a wide range of problems in data inversion and uncertainty quantification, such as earthquake epicenter location and climate projections. To quantify the uncertainties resulting from a range of plausible model configurations it is necessary to estimate a multidimensional probability distribution. The computational cost of estimating these distributions for geoscience applications is impractical using traditional methods such as Metropolis/Gibbs algorithms, as simulation costs limit the number of experiments that can be obtained reasonably. Several alternate sampling strategies have been proposed that could improve on the sampling efficiency, including Multiple Very Fast Simulated Annealing (MVFSA) and Adaptive Metropolis algorithms. The performance of these proposed sampling strategies is evaluated with a surrogate climate model that is able to approximate the noise and response behavior of a realistic atmospheric general circulation model (AGCM). The surrogate model is fast enough that its evaluation can be embedded in these Monte Carlo algorithms. We show that adaptive methods can be superior to MVFSA in approximating the known posterior distribution with fewer forward evaluations. However, the adaptive methods can also be limited by inadequate sample mixing. The Single Component and Delayed Rejection Adaptive Metropolis algorithms were found to resolve these limitations, although challenges remain in approximating multi-modal distributions. The results show that these advanced methods of statistical inference can provide practical solutions to the climate model calibration problem and challenges in quantifying climate projection uncertainties. The computational methods would also be useful for problems outside climate prediction, particularly those where sampling is limited by the availability of computational resources.
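
A minimal sketch of the Adaptive Metropolis ingredient discussed above, in which the Gaussian proposal covariance is periodically re-estimated from the chain's own history; the target here is a cheap two-dimensional toy posterior, not a climate-model surrogate, and the adaptation interval and scaling are common defaults rather than the paper's settings:

import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # Correlated 2-D Gaussian as a toy posterior.
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

d, n_iter = 2, 20000
x = np.zeros(d)
chain = np.empty((n_iter, d))
lp = log_target(x)
prop_cov = 0.1 * np.eye(d)
sd = 2.4**2 / d                      # standard Adaptive Metropolis scaling

for i in range(n_iter):
    prop = rng.multivariate_normal(x, prop_cov)
    lp_prop = log_target(prop)
    if np.log(rng.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain[i] = x
    if i >= 1000 and i % 100 == 0:   # adapt every 100 steps after a warm-up
        prop_cov = sd * np.cov(chain[:i].T) + 1e-6 * np.eye(d)

print(chain[5000:].mean(axis=0))
print(np.cov(chain[5000:].T))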


Journal of Applied Statistics | 2005

Multivariate Bayes Wavelet shrinkage and applications

Gabriel Huerta

In recent years, wavelet shrinkage has become a very appealing method for data de-noising and density function estimation. In particular, Bayesian modelling via hierarchical priors has introduced novel approaches to wavelet analysis that have become very popular and are very competitive with standard hard or soft thresholding rules. In this spirit, this paper proposes a hierarchical prior that is elicited on the model parameters describing the wavelet coefficients after applying a Discrete Wavelet Transformation (DWT). In contrast to other approaches, the prior is a multivariate normal distribution with a covariance matrix that allows for correlations among wavelet coefficients corresponding to the same level of detail. In addition, an extra scale parameter is incorporated that permits an additional level of shrinkage over the coefficients. The posterior distribution for this shrinkage procedure is not available in closed form, but it is easily sampled through Markov chain Monte Carlo (MCMC) methods. Applications to a set of test signals and two noisy signals are presented.
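
A minimal sketch of wavelet shrinkage in this spirit: a Haar DWT followed by a per-level, posterior-mean-style shrinkage factor applied to the detail coefficients. The correlated multivariate normal prior and the MCMC sampler of the paper are not reproduced; the test signal, noise level, and per-level variance estimates are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(3)

def haar_dwt(x):
    # Full Haar decomposition of a length-2^J signal; returns (approximation, details by level).
    details, a = [], x.astype(float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2.0))
        a = (even + odd) / np.sqrt(2.0)
    return a, details

def haar_idwt(a, details):
    # Inverse of haar_dwt.
    for d in reversed(details):
        even, odd = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
        a = np.empty(2 * len(d))
        a[0::2], a[1::2] = even, odd
    return a

# Noisy test signal.
n = 256
signal = np.sin(np.linspace(0, 4 * np.pi, n)) + (np.linspace(0, 1, n) > 0.5)
y = signal + rng.normal(scale=0.3, size=n)

approx, details = haar_dwt(y)
sigma2 = 0.3**2                               # assume known noise variance
shrunk = []
for d in details:
    tau2 = max(np.var(d) - sigma2, 1e-8)      # crude per-level signal variance
    shrunk.append(d * tau2 / (tau2 + sigma2)) # posterior-mean style shrinkage
denoised = haar_idwt(approx, shrunk)
print(np.mean((denoised - signal) ** 2), np.mean((y - signal) ** 2))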


Computational Statistics & Data Analysis | 2006

Multivariate time series modeling and classification via hierarchical VAR mixtures

Raquel Prado; Francisco J. Molina; Gabriel Huerta

A novel class of models for multivariate time series is presented. We consider hierarchical mixture-of-experts (HME) models in which the experts, or building blocks of the model, are vector autoregressions (VARs). It is assumed that the VAR-HME model partitions the covariate space, specifically including time as a covariate, into overlapping regions called overlays. In each overlay a given number of VAR experts compete with each other so that the most suitable one for the overlay is favored by a large weight. The weights have a particular parametric form that allows the modeler to include relevant covariates. Estimation of the model parameters is achieved via the EM (expectation-maximization) algorithm. A new algorithm is also developed to select the optimal number of overlays, the number of VAR models, and the model orders of the VARs that define a particular VAR-HME model configuration; it uses the Bayesian information criterion (BIC) as the optimality criterion. Issues of model checking and inference of latent structure in multiple time series are investigated. The new methodology is illustrated by analyzing a synthetic data set and a 7-channel electroencephalogram data set.
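
A minimal sketch of the VAR building block and the BIC criterion mentioned above; the gating weights, overlays, and EM estimation of the full VAR-HME model are not reproduced, and the simulated series is a placeholder:

import numpy as np

rng = np.random.default_rng(4)

# Simulate a 2-dimensional VAR(1) series.
T, k = 500, 2
A = np.array([[0.6, 0.2], [-0.3, 0.5]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.normal(scale=0.5, size=k)

def fit_var(y, p):
    # OLS fit of a VAR(p); returns (coefficients, residual covariance, BIC).
    T, k = y.shape
    X = np.hstack([y[p - j - 1:T - j - 1] for j in range(p)])   # lagged regressors
    X = np.hstack([np.ones((T - p, 1)), X])                     # intercept
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    Sigma = resid.T @ resid / (T - p)
    loglik = -0.5 * (T - p) * (k * np.log(2 * np.pi)
                               + np.log(np.linalg.det(Sigma)) + k)
    return B, Sigma, -2 * loglik + B.size * np.log(T - p)

for p in (1, 2, 3):
    _, _, bic = fit_var(y, p)
    print(f"VAR({p}) BIC = {bic:.1f}")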


Atmósfera | 2016

A study of trends for Mexico City ozone extremes: 2001-2014

Sara Rodríguez; Gabriel Huerta; Hortensia Reyes

We analyze trends in high values of tropospheric ozone over Mexico City based on data for the years 2001-2014. The data consist of monthly maximum ozone concentrations from 29 monitoring stations. Due to the large amount of missing data, we consider the monthly maxima for five well-identified geographical zones. We assess time trends with a statistical model that assumes that these observations follow an extreme value distribution, where the location parameter changes in time according to a regression model. In addition, we use Bayesian methods to estimate simultaneously a zonal and an overall time-trend parameter along with the shape and scale parameters of the Generalized Extreme Value distribution. We compare our results to a model based on a normal distribution. Our analyses show some evidence of decaying ozone levels for the monthly maxima during the period of study.
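
A minimal sketch of a GEV fit with a linear time trend in the location parameter, in the spirit of the model described above but using plain maximum likelihood instead of the paper's Bayesian zonal/overall hierarchy; the synthetic data and starting values are illustrative assumptions:

import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic monthly maxima with a slowly decaying location parameter.
T = 14 * 12
t = np.arange(T) / 12.0
y = genextreme.rvs(c=-0.1, loc=120 - 1.5 * t, scale=15, size=T, random_state=rng)

def neg_loglik(theta):
    a, b, log_sigma, xi = theta
    # scipy's shape c equals -xi in the usual GEV parameterization.
    ll = genextreme.logpdf(y, c=-xi, loc=a + b * t, scale=np.exp(log_sigma))
    return -np.sum(ll)

start = np.array([np.mean(y), 0.0, np.log(np.std(y)), 0.1])
fit = minimize(neg_loglik, start, method="Nelder-Mead")
a_hat, b_hat, log_sigma_hat, xi_hat = fit.x
print(f"trend per year: {b_hat:.2f}, shape: {xi_hat:.2f}")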


Archive | 2006

Bayesian Inference on Mixture-of-Experts for Estimation of Stochastic Volatility

Alejandro Villagran; Gabriel Huerta

The problem of model mixing in time series, for which the interest lies in the estimation of stochastic volatility, is addressed using the approach known as Mixture-of-Experts (ME). Specifically, this work proposes a ME model where the experts are defined through ARCH, GARCH and EGARCH structures. Estimates of the predictive distribution of volatilities are obtained using a full Bayesian approach. The methodology is illustrated with an analysis of a section of US dollar/German mark exchange rates and a study of the Mexican stock market index using the Dow Jones Industrial index as a covariate.
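
A minimal sketch of the GARCH(1,1) recursion that would play the role of one expert in such a mixture; the mixture weights and the Bayesian estimation of the full ME model are not shown, and the returns series and parameter values are placeholders:

import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    # Conditional variance path: sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)           # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(6)
r = rng.normal(scale=0.01, size=1000)     # stand-in daily returns
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.9)
print(np.sqrt(sigma2[-5:]))               # most recent volatility estimates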


Communications in Statistics - Simulation and Computation | 2016

Non-parametric Sampling Approximation via Voronoi Tessellations

Alejandro Villagran; Gabriel Huerta; Marina Vannucci; Charles S. Jackson; Alvaro Nosedal

In this article we propose a novel non-parametric sampling approach to estimate the posterior distribution of parameters of interest. Starting from an initial sample over the parameter space, the method uses this initial information to form a geometrical structure known as a Voronoi tessellation over the whole parameter space. This rough approximation to the posterior distribution provides a way to generate new points from the posterior without any additional costly model evaluations. By running a traditional Markov chain Monte Carlo (MCMC) sampler over the non-parametric tessellation, the initial approximate distribution is refined sequentially. We apply this method to a couple of climate models to show that this hybrid scheme successfully approximates the posterior distribution of the model parameters.
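
A minimal sketch of the idea: an initial design with known log-posterior values induces a piecewise-constant nearest-neighbor (Voronoi) surrogate, over which a standard Metropolis sampler then runs with no further expensive evaluations. The toy target density and design size are illustrative assumptions, not a climate model:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)

def expensive_log_post(x):
    return -0.5 * np.sum(x**2, axis=-1)           # toy stand-in posterior

# Initial design: the only points at which the "expensive" model is evaluated.
design = rng.uniform(-4, 4, size=(200, 2))
design_lp = expensive_log_post(design)
tree = cKDTree(design)

def surrogate_log_post(x):
    _, idx = tree.query(x)                        # nearest design point
    return design_lp[idx]                         # piecewise-constant over Voronoi cells

x, lp = np.zeros(2), surrogate_log_post(np.zeros(2))
samples = []
for _ in range(20000):
    prop = x + rng.normal(scale=0.5, size=2)
    lp_prop = surrogate_log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x.copy())
print(np.mean(samples, axis=0))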


Communications in Statistics: Case Studies, Data Analysis and Applications | 2015

A study of trends for Mexico City ozone extremes

Sara Rodríguez; Gabriel Huerta; Hortensia Reyes

The goal of this article is to analyze trends in high values of tropospheric ozone over Mexico City based on the Generalized Extreme Value (GEV) distribution and modeling from a Bayesian perspective. Our study of trends is based on data for three consecutive months (April, May, and June) of the years 2007–2013, for which we computed the maximum ozone concentration per 72 h at 16 monitoring stations. First, we perform a station-by-station analysis of the data in which the location parameter of the GEV distribution includes a time-trend regression component. Additionally, we consider a Bayesian hierarchical model to study the data from all the stations jointly. For this model, the trend and intercept parameters for each station are assumed to have a spatial behavior centered on a common trend and intercept. Our analyses show that for a few stations there is some evidence of decaying levels; however, the global trend estimate confirms that there is no significant change during the period of study.
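
A minimal sketch of hierarchical pooling in this spirit: hypothetical per-station trend estimates are shrunk toward a common trend by a simple normal-normal calculation. This is a crude empirical-Bayes stand-in for the paper's joint Bayesian spatial model, and the station estimates, standard errors, and simulated numbers are placeholders:

import numpy as np

rng = np.random.default_rng(8)

# Stage 1 output (hypothetical): per-station trend estimates and standard errors,
# e.g. from station-by-station GEV fits like the sketch shown earlier.
b_hat = rng.normal(-0.5, 0.8, size=16)        # station trend estimates
se = np.full(16, 0.6)                         # their standard errors

# Stage 2: normal-normal hierarchical shrinkage toward a common trend.
tau2 = max(np.var(b_hat) - np.mean(se**2), 1e-8)   # between-station variance
w = 1.0 / (se**2 + tau2)
common_trend = np.sum(w * b_hat) / np.sum(w)
station_trend = (tau2 * b_hat + se**2 * common_trend) / (tau2 + se**2)
print(f"common trend: {common_trend:.3f}")
print(np.round(station_trend, 3))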


Journal of Pediatric Nursing | 2017

Social Determinants of Overweight and Obesity Rates by Elementary School in a Predominantly Hispanic School District

Richard Santos; Gabriel Huerta; Menuka Karki; Andrea Cantarero

Objective: This study analyzes the social determinants associated with the overweight or obesity prevalence of 85 elementary schools during the 2010–11 academic year in a predominantly Hispanic school district.

Methods: A binomial logistic regression is used to analyze the aggregate overweight or obesity rate of a school by the percentage of Hispanic students in each school, selected school and neighborhood characteristics, and its geographical location.

Results: The proportion of Hispanic enrollment more readily explains a school's aggregate overweight or obesity rate than social determinants or spatial location. The number of fast food establishments and the academic ranking of a school appear to have a slight impact on the aggregate prevalence rate. The spatial location of a school is not a significant factor after controlling for other determinants.

Conclusions: An elementary school's overall overweight or obesity rate provides a valuable health indicator for studying the social determinants of obesity among Hispanics and other students within a local neighborhood.

Highlights:
- A school's overall overweight or obesity rate is a useful health indicator.
- Social determinants are used to explain the variance in obesity between schools.
- A major social determinant of obesity is the proportion of Hispanic students in a school.
- The number of fast food establishments and a school's academic ranking showed a slight impact.
- The spatial location of a school is not significant, controlling for other determinants.
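
A minimal sketch of a binomial logistic regression for aggregate school-level counts, similar in spirit to the model described above; the covariate names and simulated data are placeholders, not the study's actual variables:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

n_schools = 85
enrollment = rng.integers(200, 600, size=n_schools)
pct_hispanic = rng.uniform(0.2, 0.95, size=n_schools)
fast_food = rng.poisson(3, size=n_schools)

# Simulated school-level overweight/obese counts.
true_logit = -0.8 + 1.2 * pct_hispanic + 0.05 * fast_food
p = 1.0 / (1.0 + np.exp(-true_logit))
overweight = rng.binomial(enrollment, p)

X = sm.add_constant(np.column_stack([pct_hispanic, fast_food]))
# Endog as (successes, failures) counts for a grouped binomial GLM.
endog = np.column_stack([overweight, enrollment - overweight])
model = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print(model.summary())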

Collaboration


Dive into Gabriel Huerta's collaborations.

Top Co-Authors

Charles S. Jackson
University of Texas at Austin

Bruno Sansó
University of California

Lauren Hund
University of New Mexico

Raquel Prado
University of California

Hortensia Reyes
Benemérita Universidad Autónoma de Puebla

Sara Rodríguez
Benemérita Universidad Autónoma de Puebla