Juan Miguel Ortiz-de-Lazcano-Lobato
University of Málaga
Publications
Featured research published by Juan Miguel Ortiz-de-Lazcano-Lobato.
IEEE Transactions on Neural Networks | 2009
Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez
In this paper, we present a probabilistic neural model which extends Kohonen's self-organizing map (SOM) by performing a probabilistic principal component analysis (PPCA) at each neuron. Several SOMs have been proposed in the literature to capture the local principal subspaces, but our approach offers a probabilistic model while having low complexity in the dimensionality of the input space. This makes it possible to process very high-dimensional data and obtain reliable estimations of the probability densities, which are based on the PPCA framework. Experimental results are presented which show the map formation capabilities of the proposal with high-dimensional data, and its potential in image and video compression applications.
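The core idea, a local PPCA density model attached to each map unit, can be sketched in a few lines. The sketch below is an illustration only, not the authors' model: a plain k-means assignment stands in for the SOM winner selection, and scikit-learn's PCA supplies the probabilistic-PCA log-likelihood via score_samples; the map size, subspace dimensionality and toy data are arbitrary assumptions.

```python
# Illustration only: a PPCA density estimate per map unit,
# with k-means standing in for the SOM winner selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))              # toy high-dimensional data

n_units, n_components = 9, 5                 # assumed 3x3 map, 5-D local subspaces
km = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(X)

local_ppca = []
for u in range(n_units):
    Xu = X[km.labels_ == u]
    local_ppca.append(PCA(n_components=n_components).fit(Xu))

# Density of a new sample: PPCA log-likelihood under its winning unit's model.
x = X[:1]
u = int(km.predict(x)[0])
print(f"unit {u}, PPCA log-density {local_ppca[u].score_samples(x)[0]:.2f}")
```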
International Journal of Neural Systems | 2009
Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato
We present a new neural model which extends classical competitive learning (CL) by performing a Probabilistic Principal Component Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
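The dimensionality-learning aspect can be illustrated with a simple stand-in rule of our own (the paper's actual criterion is not reproduced here): keep, per cluster, the smallest number of principal directions that explains a fixed fraction of that cluster's variance.

```python
# Sketch with an assumed explained-variance rule, not the paper's criterion.
import numpy as np
from sklearn.decomposition import PCA

def local_dimensionality(X_cluster, var_threshold=0.95):
    pca = PCA().fit(X_cluster)
    cum = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum, var_threshold) + 1)

rng = np.random.default_rng(1)
# A cluster that is essentially 2-D embedded in 10-D, plus small noise.
cluster = np.zeros((500, 10))
cluster[:, :2] = rng.normal(size=(500, 2))       # variance only in 2 directions
cluster += 0.01 * rng.normal(size=(500, 10))     # small isotropic noise
print(local_dimensionality(cluster))             # prints 2 for this toy cluster
```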
Neural Computation | 2004
Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; José Muñoz-Pérez; José Antonio Gómez-Ruiz
We present a new neural model that extends classical competitive learning by performing a principal component analysis (PCA) at each neuron. This model represents an improvement with respect to known local PCA methods, because the entire data set does not need to be presented to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction properties of the PCA. Furthermore, every neuron is able to modify its behavior to adapt to the local dimensionality of the input distribution. Hence, our model has a dimensionality estimation capability. The experimental results we present show the dimensionality-reduction capabilities of the model with multisensor images.
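A rough sketch of this online flavour, under our own assumptions rather than the authors' update equations: each neuron keeps an incremental PCA that is refined only with the mini-batch samples it wins, so the full data set is never revisited at any step.

```python
# Hedged sketch: per-neuron incremental PCA driven by competitive winner selection.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(2)
n_neurons, dim, k = 4, 16, 3
prototypes = rng.normal(size=(n_neurons, dim))
local_pca = [IncrementalPCA(n_components=k) for _ in range(n_neurons)]

for _ in range(200):                             # stream of mini-batches
    batch = rng.normal(size=(32, dim))
    winners = np.argmin(
        np.linalg.norm(batch[:, None, :] - prototypes[None, :, :], axis=2), axis=1)
    for j in range(n_neurons):
        won = batch[winners == j]
        if len(won) >= k:                        # partial_fit needs >= k samples
            prototypes[j] += 0.05 * (won.mean(axis=0) - prototypes[j])
            local_pca[j].partial_fit(won)

print(local_pca[0].components_.shape)            # (3, 16): local principal directions
```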
Neural Processing Letters | 2013
Rafael Marcos Luque-Baena; Juan Miguel Ortiz-de-Lazcano-Lobato; Ezequiel López-Rubio; Enrique Domínguez; Esteban J. Palomo
Tracking of moving objects in real situations is a challenging research issue, due to dynamic changes in object or background appearance, illumination, shape and occlusions. In this paper, we deal with these difficulties by incorporating an adaptive feature weighting mechanism into the proposed growing competitive neural network for multiple object tracking. The neural network takes advantage of the most relevant object features (information provided by the proposed adaptive feature weighting mechanism) in order to estimate the trajectories of the moving objects. The feature selection mechanism is based on a genetic algorithm, and the tracking algorithm is based on a growing competitive neural network in which each unit is associated with one object in the scene. The proposed methods (object tracking and feature selection mechanism) are applied to detect the trajectories of moving vehicles on roads. Experimental results show the performance of the proposed system compared to the standard Kalman filter.
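The weighting idea can be illustrated with the assignment step alone. Everything below is a hedged sketch: the feature set and the weight values are arbitrary placeholders for the quantities the paper's genetic algorithm would actually tune.

```python
# Sketch of detection-to-unit assignment with a feature-weighted distance.
import numpy as np

def assign_detections(tracks, detections, weights):
    """tracks, detections: (n, d) feature arrays; weights: (d,) relevance values."""
    diff = tracks[:, None, :] - detections[None, :, :]
    cost = np.sqrt(((weights * diff) ** 2).sum(axis=2))   # weighted distance
    return cost.argmin(axis=0)                            # unit index per detection

tracks = np.array([[10.0, 20.0, 0.8], [50.0, 60.0, 0.3]])     # x, y, hue (toy features)
detections = np.array([[11.0, 21.0, 0.7], [49.0, 61.0, 0.4]])
weights = np.array([1.0, 1.0, 5.0])     # assumed weighting: colour matters most
print(assign_detections(tracks, detections, weights))         # [0 1]
```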
Computer Analysis of Images and Patterns | 2009
Rafael Marcos Luque; Juan Miguel Ortiz-de-Lazcano-Lobato; Ezequiel López-Rubio; Esteban J. Palomo
A Growing Competitive Neural Network system is presented as a precise method to track moving objects for video surveillance. The number of neurons in this neural model can be automatically increased or decreased in order to obtain a one-to-one association between the objects currently in the scene and the neurons. This association is maintained in each frame, which constitutes the foundation of this tracking system. Experiments show that our method is capable of accurately tracking objects in real-world video sequences.
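The grow/shrink bookkeeping that keeps the association one-to-one can be sketched as follows; the spawning threshold and update rule are simplifications of our own, not the model in the paper.

```python
# Simplified sketch: units are created for new objects and removed for departed ones.
import numpy as np

class GrowingTracker:
    def __init__(self, spawn_dist=30.0):
        self.units = []                      # one prototype (x, y) per tracked object
        self.spawn_dist = spawn_dist

    def update(self, detections, lr=0.5):
        matched = set()
        for d in detections:
            d = np.asarray(d, dtype=float)
            if self.units:
                dists = [np.linalg.norm(d - u) for u in self.units]
                j = int(np.argmin(dists))
            if not self.units or dists[j] > self.spawn_dist:
                self.units.append(d)                         # grow: object entered
                matched.add(len(self.units) - 1)
            else:
                self.units[j] += lr * (d - self.units[j])    # adapt the winning unit
                matched.add(j)
        # shrink: drop units with no matching detection (object left the scene)
        self.units = [u for i, u in enumerate(self.units) if i in matched]
        return self.units

trk = GrowingTracker()
print(len(trk.update(np.array([[10.0, 10.0], [100.0, 100.0]]))))   # 2 units created
print(len(trk.update(np.array([[12.0, 11.0]]))))                   # 1 unit remains
```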
Neurocomputing | 2011
Ezequiel López-Rubio; Esteban José Palomo-Ferrer; Juan Miguel Ortiz-de-Lazcano-Lobato; María del Carmen Vargas-González
Self-organizing neural networks are usually focused on prototype learning, while the topology is held fixed during the learning process. Here, a method is proposed to adapt the topology of the network so that it reflects the internal structure of the input distribution. This leads to a self-organizing graph, where each unit is a mixture component of a mixture of Gaussians (MoG). The corresponding update equations are derived from the stochastic approximation framework. This approach combines the advantages of probabilistic mixtures with those of self-organization. Experimental results are presented to show the self-organization ability of our proposal and its performance when used with multivariate datasets in classification and image segmentation tasks.
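A hedged illustration of the flavour only: the paper derives online stochastic-approximation updates for the mixture and its topology, whereas the sketch below fakes the outcome with a batch Gaussian mixture plus a competitive-Hebbian edge rule (connect the two components most responsible for each sample); the data, threshold and component count are assumptions.

```python
# Illustration: MoG components become graph nodes; shared responsibility creates edges.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.concatenate([rng.normal(loc=c, scale=0.5, size=(200, 2))
                    for c in [(0, 0), (2, 0), (4, 0), (4, 2)]])

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
resp = gmm.predict_proba(X)                       # responsibilities, shape (n, 4)

edges = set()
for r in resp:
    first, second = np.argsort(r)[-2:][::-1]      # two most responsible components
    if r[second] > 0.05:                          # assumed significance threshold
        edges.add(tuple(sorted((int(first), int(second)))))
print(sorted(edges))      # component pairs linked by shared responsibility
```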
International Conference on Artificial Neural Networks | 2006
Domingo López-Rodríguez; Enrique Mérida-Casermeiro; Juan Miguel Ortiz-de-Lazcano-Lobato; Ezequiel López-Rubio
In this work we propose a recurrent multivalued network, generalizing Hopfield's model, which can be interpreted as a vector quantizer. We explain the model and establish a relation between vector quantization and sum-of-squares clustering. To test the efficiency of this model as a vector quantizer, we apply the new technique to image compression. Two well-known images are used as benchmarks, allowing us to compare our model to standard competitive learning. In our simulations, the new technique clearly outperforms the classical algorithm for vector quantization, achieving not only a better distortion rate but also a drastically reduced computational time.
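The relation the abstract mentions, that vector quantization of an image amounts to sum-of-squares clustering of its blocks, can be made concrete with the standard k-means baseline. This is the competitive-learning reference point, not the multivalued recurrent model itself, and the image and block size are toy assumptions.

```python
# Vector quantisation of image blocks as sum-of-squares clustering (k-means baseline).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64)).astype(float)     # toy greyscale image

# Split into 4x4 blocks -> vectors of dimension 16.
blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

codebook_size = 32
km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
indices = km.labels_                       # compressed representation (one index per block)
decoded = km.cluster_centers_[indices]     # reconstruction from the codebook

mse = ((blocks - decoded) ** 2).mean()     # distortion (the sum-of-squares objective)
print(f"codebook {codebook_size}, per-pixel MSE {mse:.1f}")
```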
International Conference on Artificial Neural Networks | 2007
Domingo López-Rodríguez; Enrique Mérida-Casermeiro; Juan Miguel Ortiz-de-Lazcano-Lobato; Gloria Galán-Marín
In this paper, the K-pages graph layout problem is solved by a new neural model. This model consists of two neural networks performing jointly in order to minimize the same energy function. The neural technique applied to this problem reduces the energy function by changing the outputs of both networks: the outputs of the first network represent the locations of the nodes on the node line, while the outputs of the second indicate the page on which each edge is drawn. A detailed description of the model is presented, and the technique to minimize the energy function is fully described. It has proved to be a very competitive and efficient algorithm, in terms of solution quality and computational time, when compared to state-of-the-art heuristic methods specifically designed for this problem. Some simulation results are presented in this paper to show the comparative efficiency of the methods.
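The quantity both networks cooperate to minimize is essentially the number of edge crossings of the resulting book embedding. A direct crossing count for a given spine ordering and page assignment (our own straightforward implementation, not the paper's energy function) reads:

```python
# Crossing count for a book embedding: two edges on the same page cross
# exactly when their endpoints interleave along the spine ordering.
from itertools import combinations

def crossings(order, edges, pages):
    """order: vertices along the spine; edges: list of (u, v); pages[i]: page of edges[i]."""
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for (i, e), (j, f) in combinations(enumerate(edges), 2):
        if pages[i] != pages[j]:
            continue                                # edges on different pages never cross
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        if a < c < b < d or c < a < d < b:          # endpoints interleave
            total += 1
    return total

edges = [(0, 2), (1, 3), (0, 3)]
print(crossings([0, 1, 2, 3], edges, pages=[0, 0, 1]))   # 1: (0,2) and (1,3) cross
```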
International Conference on Adaptive and Natural Computing Algorithms | 2007
Gloria Galán-Marín; Enrique Mérida-Casermeiro; Domingo López-Rodríguez; Juan Miguel Ortiz-de-Lazcano-Lobato
The map-coloring problem is a well-known combinatorial optimization problem which frequently appears in mathematics, graph theory and artificial intelligence. This paper presents a study of the performance of several binary Hopfield networks with discrete dynamics on this classic problem. A number of instances have been simulated to demonstrate that only the proposed binary model provides optimal solutions. In addition, for large-scale maps an algorithm is presented that improves the local minima of the network by solving gradually growing submaps of the considered map. Simulation results for several n-region 4-color maps showed that the proposed neural algorithm converged to a correct coloring from at least 90% of initial states, without the fine-tuning of parameters required in other Hopfield models.
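A toy version of the energy such binary networks descend (a simplified formulation of our own, not the exact model studied in the paper): one binary unit per (region, color) pair, penalizing regions without exactly one color and bordering regions that share one.

```python
# Simplified map-coloring energy: zero exactly for a proper coloring.
import numpy as np

def energy(V, adj):
    """V: (n_regions, n_colors) binary matrix; adj: list of (i, j) bordering pairs."""
    one_color = ((V.sum(axis=1) - 1) ** 2).sum()           # each region gets one color
    conflicts = sum((V[i] * V[j]).sum() for i, j in adj)   # neighbours must differ
    return one_color + conflicts

adj = [(0, 1), (1, 2), (0, 2)]            # a triangle of mutually bordering regions
V = np.zeros((3, 4), dtype=int)
V[0, 0] = V[1, 1] = V[2, 2] = 1           # a proper 3-coloring of the triangle
print(energy(V, adj))                      # 0: a valid coloring has zero energy
```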
Expert Systems With Applications | 2011
David A. Elizondo; Juan Miguel Ortiz-de-Lazcano-Lobato; Ralph Birkenhead
Several algorithms exist for testing linear separability. The choice of a particular testing algorithm affects the performance of constructive neural network algorithms that are based on the transformation of a non-linearly separable classification problem into a linearly separable one. This paper presents an empirical study of these effects in terms of topology size, convergence time, and generalisation level of the neural networks. Six different methods for testing linear separability were used in this study. Four of the six methods are exact and the remaining two are approximate. A total of nine machine learning benchmarks were used for this study.
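One classical exact test, based on linear programming and chosen here purely as an illustration (the paper's six methods are not reproduced), declares two classes linearly separable exactly when the constraints y_i (w·x_i + b) >= 1 are feasible:

```python
# Linear-programming feasibility test for linear separability.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """X: (n, d) samples; y: labels in {-1, +1}."""
    n, d = X.shape
    # Variables: w (d values) and b. Constraints: -y_i (x_i.w + b) <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(linearly_separable(X, np.array([-1, -1, -1, 1])))    # AND: True
print(linearly_separable(X, np.array([-1, 1, 1, -1])))     # XOR: False
```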