Publication


Featured research published by María del Carmen Vargas-González.


Neurocomputing | 2011

Dynamic topology learning with the probabilistic self-organizing graph

Ezequiel López-Rubio; Esteban José Palomo-Ferrer; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González

Self-organizing neural networks are usually focused on prototype learning, while the topology is held fixed during the learning process. Here we propose a method to adapt the topology of the network so that it reflects the internal structure of the input distribution. This leads to a self-organizing graph, where each unit is a mixture component of a mixture of Gaussians (MoG). The corresponding update equations are derived from the stochastic approximation framework. This approach combines the advantages of probabilistic mixtures with those of self-organization. Experimental results are presented to show the self-organization ability of our proposal and its performance when used with multivariate datasets in classification and image segmentation tasks.
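The two ingredients of the method, a Gaussian mixture fitted to the input and a topology derived from it, can be illustrated with a minimal sketch. This is not the authors' stochastic-approximation update: it runs batch EM on an isotropic MoG and then links the two most responsible units per sample (a competitive-Hebbian heuristic), purely to show how a graph can emerge from mixture responsibilities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy input: two well-separated 2-D clusters.
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(3.0, 0.3, (100, 2))])

K = 4                                   # number of mixture units
mu = X[::50][:K].copy()                 # deterministic spread-out init
var = np.full(K, 1.0)                   # isotropic variances
pi = np.full(K, 1.0 / K)                # mixing weights

for _ in range(50):
    # E-step: responsibility of each unit for each sample
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    logp = -0.5 * d2 / var - np.log(var) + np.log(pi)
    logp -= logp.max(1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(1, keepdims=True)
    # M-step: update weights, means and variances
    nk = r.sum(0) + 1e-12
    pi = nk / len(X)
    mu = (r.T @ X) / nk[:, None]
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    var = (r * d2).sum(0) / (2 * nk) + 1e-6

# Topology: link the two units most responsible for each sample
edges = {tuple(sorted(np.argsort(row)[-2:])) for row in r}
```

After convergence the units settle inside the clusters and the edge set connects only units that share data, so the graph mirrors the structure of the input distribution.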


Ambient Intelligence | 2009

Probabilistic Self-Organizing Graphs

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González

Self-organizing neural networks are usually focused on prototype learning, while the topology is held fixed during the learning process. Here we propose a method to adapt the topology of the network so that it reflects the internal structure of the input distribution. This leads to a self-organizing graph, where each unit is a mixture component of a Mixture of Gaussians (MoG). The corresponding update equations are derived from the stochastic approximation framework. Experimental results are presented to show the self-organization ability of our proposal and its performance when used with multivariate datasets.


International Work-Conference on Artificial and Natural Neural Networks | 2007

Self-organization of probabilistic PCA models

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

We present a new neural model, which extends Kohonen's self-organizing map (SOM) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. Several self-organizing maps have been proposed in the literature to capture the local principal subspaces, but our approach offers a probabilistic model at each neuron while having linear complexity in the dimensionality of the input space. This makes it possible to process very high-dimensional data and obtain reliable estimations of the local probability densities, which are based on the PPCA framework. Experimental results are presented, which show the map formation capabilities of the proposal with high-dimensional data.
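The per-neuron PPCA model has a well-known closed-form maximum-likelihood solution (Tipping & Bishop), which the sketch below implements for a single unit; the self-organizing machinery of the paper is omitted.

```python
import numpy as np

def ppca(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop)."""
    mu = X.mean(0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    vals, vecs = vals[::-1], vecs[:, ::-1]     # descending eigenvalues
    sigma2 = vals[q:].mean()                   # discarded variance -> noise
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return mu, W, sigma2

rng = np.random.default_rng(1)
# 5-D observations whose variance lives in a 2-D subspace plus noise
X = rng.normal(size=(500, 2)) @ (3.0 * rng.normal(size=(2, 5)))
X += 0.1 * rng.normal(size=(500, 5))
mu, W, sigma2 = ppca(X, q=2)
# The model covariance W W^T + sigma^2 I approximates the sample covariance
```

Because only the top q eigenpairs are kept, the cost per unit is linear in the input dimensionality once the local covariance statistics are available, which is the property the abstract highlights.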


Ambient Intelligence | 2009

Nonparametric Location Estimation for Probability Density Function Learning

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González

We present a method to estimate the probability density function of multivariate distributions. Standard Parzen window approaches use the sample mean and the sample covariance matrix around every input vector. This choice yields poor robustness for real input datasets. We propose to use the L1-median to estimate the local mean and covariance matrix with a low sensitivity to outliers. In addition to this, a smoothing phase is considered, which improves the estimation by integrating the information from several local clusters. Hence, a specific mixture component is learned for each local cluster. As a result, the method outperforms proposals whose local kernels are less robust and/or which lack smoothing strategies, such as the manifold Parzen windows.
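The L1-median (geometric median) that replaces the sample mean can be computed with Weiszfeld's algorithm. A minimal sketch, independent of the paper's full estimator, showing its low sensitivity to outliers:

```python
import numpy as np

def l1_median(X, n_iter=200, eps=1e-9):
    """Geometric (L1) median of the rows of X via Weiszfeld's algorithm."""
    m = X.mean(0)                          # start from the sample mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)       # guard against zero distances
        m_new = (w[:, None] * X).sum(0) / w.sum()
        if np.linalg.norm(m_new - m) < 1e-12:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
X[:10] += 100.0                            # 5% gross outliers
```

On this data the sample mean is dragged several units towards the outliers, while the L1-median stays close to the origin.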


International Conference on Artificial Neural Networks | 2008

Robust Nonparametric Probability Density Estimation by Soft Clustering

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

A method to estimate the probability density function of multivariate distributions is presented. The classical Parzen window approach builds a spherical Gaussian density around every input sample. This choice of the kernel density yields poor robustness for real input datasets. We use multivariate Student-t distributions in order to improve the adaptation capability of the model. Our method has a first stage where hard neighbourhoods are determined for every sample. Then soft clusters are considered to merge the information coming from several hard neighbourhoods. Hence, a specific mixture component is learned for each soft cluster. As a result, the method outperforms proposals whose local kernels are less robust and/or which lack smoothing strategies, such as the manifold Parzen windows.
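The robustness gain from the Student-t kernel comes from its heavier tails. The sketch below implements the standard multivariate Student-t log-density (not the paper's full mixture model) and compares it against a Gaussian at an outlying point:

```python
import numpy as np
from math import lgamma, log, pi

def mvn_logpdf(x, mu, Sigma):
    """Multivariate Gaussian log-density."""
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, x - mu)
    return -0.5 * (len(mu) * log(2 * pi)
                   + 2 * np.log(np.diag(L)).sum() + z @ z)

def mvt_logpdf(x, mu, Sigma, nu):
    """Multivariate Student-t log-density with nu degrees of freedom."""
    d = len(mu)
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, x - mu)
    return (lgamma((nu + d) / 2) - lgamma(nu / 2)
            - 0.5 * (d * log(nu * pi) + 2 * np.log(np.diag(L)).sum())
            - 0.5 * (nu + d) * np.log1p((z @ z) / nu))

# Far from the mean, the heavy-tailed Student-t keeps much more density,
# so a single outlier distorts the fitted kernel far less.
x_far = np.array([5.0, 5.0])
t_tail = mvt_logpdf(x_far, np.zeros(2), np.eye(2), nu=3.0)
g_tail = mvn_logpdf(x_far, np.zeros(2), np.eye(2))
```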


International Work-Conference on Artificial and Natural Neural Networks | 2007

Automatic model selection for probabilistic PCA

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique which defines a Gaussian probabilistic model at each unit. Neither the number of units nor the number of principal directions per unit is learned in the original approach. Variational Bayesian approaches have been proposed for this purpose, which rely on assumptions on the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, where cross-validation is used to guide the search for an optimal model selection. This makes it possible to learn the model architecture without any assumptions beyond those of the basic PPCA framework. Experimental results are presented, which show the probability density estimation capabilities of the proposal with high-dimensional data.
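The cross-validation idea can be illustrated on a single PPCA unit: fit candidate latent dimensions on a training split and keep the one with the best held-out log-likelihood. A minimal sketch using the closed-form PPCA covariance, not the authors' full MPPCA architecture search:

```python
import numpy as np

def ppca_cov(X, q):
    """Closed-form PPCA model covariance W W^T + sigma^2 I."""
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    vals, vecs = vals[::-1], vecs[:, ::-1]
    s2 = vals[q:].mean()
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - s2, 0.0))
    return W @ W.T + s2 * np.eye(X.shape[1])

def loglik(X, mu, C):
    """Total Gaussian log-likelihood of X under N(mu, C)."""
    L = np.linalg.cholesky(C)
    Z = np.linalg.solve(L, (X - mu).T)
    per = X.shape[1] * np.log(2 * np.pi) + 2 * np.log(np.diag(L)).sum()
    return -0.5 * (len(X) * per + (Z ** 2).sum())

rng = np.random.default_rng(3)
# 6-D data whose true latent dimension is 2
X = rng.normal(size=(500, 2)) @ (3.0 * rng.normal(size=(2, 6)))
X += 0.1 * rng.normal(size=(500, 6))
Xtr, Xval = X[:400], X[400:]

# Held-out log-likelihood guides the choice of latent dimension q
scores = {q: loglik(Xval, Xtr.mean(0), ppca_cov(Xtr, q))
          for q in range(1, 6)}
best_q = max(scores, key=scores.get)
```

Underestimating the latent dimension (q=1 here) is heavily penalized on the validation split, while overestimating it costs little, which is why the held-out curve is informative about the architecture.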


International Conference on Artificial Neural Networks | 2007

Soft clustering for nonparametric probability density function estimation

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

We present a nonparametric probability density estimation model. The classical Parzen window approach builds a spherical Gaussian density around every input sample. Our method has a first stage where hard neighbourhoods are determined for every sample. Then soft clusters are considered to merge the information coming from several hard neighbourhoods. Our proposal estimates the local principal directions to yield a specific Gaussian mixture component for each soft cluster. As a result, the method outperforms proposals that do not allow local parameter selection and/or lack smoothing strategies, such as the manifold Parzen windows.


International Conference on Artificial Neural Networks | 2006

Local selection of model parameters in probability density function estimation

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; Enrique Mérida-Casermeiro; María del Carmen Vargas-González

Here we present a novel probability density estimation model. The classical Parzen window approach builds a spherical Gaussian density around every input sample. Our proposal selects a Gaussian specifically tuned for each sample, with an automated estimation of the local intrinsic dimensionality of the embedded manifold and the local noise variance. As a result, the method outperforms proposals that do not allow local parameter selection, such as the manifold Parzen windows.
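A common way to estimate local intrinsic dimensionality, offered here as one plausible reading of this step rather than the authors' exact procedure, is to eigendecompose the covariance of a sample's nearest neighbours and count the directions needed to explain most of the local variance:

```python
import numpy as np

def local_intrinsic_dim(X, x, k=20, var_frac=0.95):
    """Count the principal directions explaining var_frac of the
    variance among the k nearest neighbours of x."""
    d = np.linalg.norm(X - x, axis=1)
    nbrs = X[np.argsort(d)[:k]]
    vals = np.linalg.eigvalsh(np.cov(nbrs, rowvar=False))[::-1]
    ratio = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(ratio, var_frac) + 1)

rng = np.random.default_rng(4)
# A noisy 2-D plane embedded in 3-D: intrinsic dimensionality is 2
X = np.column_stack([rng.uniform(-1, 1, 500),
                     rng.uniform(-1, 1, 500),
                     0.01 * rng.normal(size=500)])
dim = local_intrinsic_dim(X, np.zeros(3))
```

The eigenvalues discarded by this count give a natural estimate of the local noise variance, the other quantity the abstract mentions.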


International Conference on Artificial Neural Networks | 2005

Intrinsic dimensionality maps with the PCASOM

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González; José Miguel López-Rubio

The PCASOM is a novel self-organizing neural model that performs Principal Components Analysis (PCA). It is related to the ASSOM network, but its training equations are simpler. The PCASOM has the ability to learn self-organizing maps of the means and correlations of complex input distributions. Here we propose a method to extend this capability to build intrinsic dimensionality maps. These maps model the underlying structure of the input. Experimental results are reported, which show the self-organizing map formation performed by the proposed network.


European Conference on Artificial Intelligence | 2004

Dynamic selection of model parameters in principal components analysis neural networks

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González; José Miguel López-Rubio

Collaboration


Dive into María del Carmen Vargas-González's collaborations.
