Pedro L. López-Cruz
Technical University of Madrid
Publications
Featured research published by Pedro L. López-Cruz.
International Journal of Approximate Reasoning | 2014
Pedro L. López-Cruz; Pedro Larrañaga; Javier DeFelipe; Concha Bielza
Neuronal morphology is hugely variable across brain regions and species, and neuron classification strategies are a matter of intense debate in neuroscience. GABAergic cortical interneurons have been a challenge because it is difficult to find a set of morphological properties that clearly defines neuronal types. A group of 48 neuroscience experts from around the world was asked to classify a set of 320 cortical GABAergic interneurons according to the main features of their three-dimensional morphological reconstructions. A methodology for building a model that captures the opinions of all the experts is proposed. First, one Bayesian network was learned for each expert, and an algorithm was proposed for clustering the Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network representing the opinions of each group of experts was induced. Finally, a consensus Bayesian multinet modeling the opinions of the whole group of experts was built. A thorough analysis of the consensus model identified different behaviors among the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types was defined by performing inference in the Bayesian multinet. These findings were used to validate the model and to gain insights into neuron morphology.
Neuroinformatics | 2011
Pedro L. López-Cruz; Concha Bielza; Pedro Larrañaga; Ruth Benavides-Piccione; Javier DeFelipe
Neuron morphology is crucial for neuronal connectivity and brain information processing. Computational models are important tools for studying dendritic morphology and its role in brain function. We applied a class of probabilistic graphical models called Bayesian networks to generate virtual dendrites from layer III pyramidal neurons from three different regions of the mouse neocortex. A set of 41 morphological variables was measured from the 3D reconstructions of real dendrites, and their probability distributions were used in a machine learning algorithm to induce the model from the data. A simulation algorithm is also proposed to obtain new dendrites by sampling values from the Bayesian networks. The main advantage of this approach is that it takes into account, and automatically locates, the relationships between variables in the data instead of using predefined dependencies. Therefore, the methodology can be applied to any neuronal class while at the same time exploiting class-specific properties. Moreover, a Bayesian network was defined for each part of the dendrite, allowing the relationships to change across sections and to model heterogeneous developmental factors or spatial influences. Several univariate statistical tests and a novel multivariate test based on Kullback–Leibler divergence estimation confirmed that virtual dendrites were similar to real ones. The analyses of the models showed relationships that conform to current neuroanatomical knowledge and support model correctness. At the same time, studying the relationships in the models can help to identify new interactions between variables related to dendritic morphology.
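The sampling step in this paper (drawing new dendrite values from a learned Bayesian network) can be illustrated with ancestral sampling. The toy two-node network below, its distributions, and all parameter values are illustrative assumptions, not the 41-variable model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_virtual_segment(rng):
    """Ancestral sampling in a toy two-node Bayesian network:
    segment length -> branching decision.  The structure and the
    parameter values are hypothetical, for illustration only."""
    # Root node: segment length (micrometres), modelled as log-normal.
    length = rng.lognormal(mean=3.0, sigma=0.5)
    # Child node: probability of bifurcating depends on the sampled length,
    # so the dependency in the network is respected when sampling.
    p_branch = 1.0 / (1.0 + np.exp(-(length - 20.0) / 5.0))
    branches = bool(rng.random() < p_branch)
    return length, branches

# Each draw is one "virtual" segment; a full virtual dendrite would chain
# such samples, one network per dendrite section as in the paper.
samples = [sample_virtual_segment(rng) for _ in range(1000)]
```

Ancestral sampling visits nodes in topological order, so every conditional distribution is sampled only after its parents have values.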
International Journal of Approximate Reasoning | 2014
Pedro L. López-Cruz; Concha Bielza; Pedro Larrañaga
Non-parametric density estimation is an important technique in probabilistic modeling and reasoning under uncertainty. We present a method for learning mixture of polynomials (MoP) approximations of one-dimensional and multidimensional probability densities from data. The method is based on B-spline (basis spline) interpolation, where a density is approximated as a linear combination of B-splines. We compute maximum likelihood estimators of the mixing coefficients of the linear combination. The Bayesian information criterion is used as the score function to select the order of the polynomials and the number of pieces of the MoP. The method is evaluated in two ways. First, we test the approximation fitting: we sample artificial datasets from known one-dimensional and multidimensional densities and learn MoP approximations from them. The quality of the approximations is analyzed according to different criteria, and the new proposal is compared with MoPs learned with Lagrange interpolation and with mixtures of truncated basis functions. Second, the proposed method is used as a non-parametric density estimation technique in Bayesian classifiers. Two of the most widely studied Bayesian classifiers, the naive Bayes and tree-augmented naive Bayes classifiers, are implemented and compared. Results on real datasets show that the non-parametric Bayesian classifiers using MoPs are comparable to kernel density-based Bayesian classifiers. We provide a free R package implementing the proposed methods.
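The core idea (a density as a linear combination of B-splines, with mixing coefficients estimated by maximum likelihood) can be sketched with the simplest case: order-2 B-splines, i.e. triangular "hat" functions. Each hat, normalised to integrate to one, is itself a density, so the approximation is a finite mixture whose weights can be fitted by EM. The knot placement and data below are illustrative assumptions, not the paper's BIC-driven procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.beta(2, 5, size=2000)  # sample supported on [0, 1]

knots = np.linspace(0.0, 1.0, 8)  # illustrative equal-width knots

def hat_densities(x, knots):
    """Evaluate each normalised order-2 B-spline (hat) density at x."""
    cols = []
    for i in range(len(knots)):
        lo = knots[max(i - 1, 0)]
        c = knots[i]
        hi = knots[min(i + 1, len(knots) - 1)]
        y = np.zeros_like(x)
        if c > lo:                       # ascending half of the hat
            m = (x >= lo) & (x <= c)
            y[m] = (x[m] - lo) / (c - lo)
        if hi > c:                       # descending half of the hat
            m = (x >= c) & (x <= hi)
            y[m] = (hi - x[m]) / (hi - c)
        area = 0.5 * (hi - lo)           # triangle area; normalise to a density
        cols.append(y / area)
    return np.column_stack(cols)         # shape (n_points, n_basis)

B = hat_densities(data, knots)
w = np.full(B.shape[1], 1.0 / B.shape[1])  # uniform initial mixing coefficients

# EM for the mixing coefficients: the basis densities are fixed, so the
# M-step reduces to averaging each component's responsibilities.
for _ in range(200):
    resp = w * B
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)
```

Because the components are fixed, this EM has a closed-form M-step and the log-likelihood increases monotonically; the paper additionally selects the polynomial order and number of pieces with BIC, which this sketch omits.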
Scientific Reports | 2015
Concha Bielza; Ruth Benavides-Piccione; Pedro L. López-Cruz; Pedro Larrañaga; Javier DeFelipe
Unraveling pyramidal cell structure is crucial to understanding cortical circuit computations. Although it is well known that pyramidal cell branching structure differs across cortical areas, the principles that determine the geometric shapes of these cells are not fully understood. Here we analyzed the branching angles in 3D-reconstructed basal dendritic arbors of hundreds of intracellularly injected cortical pyramidal cells in seven regions of the frontal, parietal, and occipital cortex of the mouse, and modeled them with von Mises distributions. We found that, despite the differences in pyramidal cell structure across these distinct functional and cytoarchitectonic cortical areas, common design principles govern the geometry of dendritic branching angles of pyramidal cells in all cortical areas.
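Fitting a von Mises distribution to a sample of branching angles amounts to estimating the mean direction μ and concentration κ. A minimal sketch using the standard closed-form approximations (circular mean for μ, the usual piecewise approximation to the inverse of A(κ) = I₁(κ)/I₀(κ) for κ); the synthetic angles here stand in for real branching-angle data:

```python
import numpy as np

def fit_von_mises(theta):
    """Estimate von Mises parameters (mu, kappa) from angles in radians.
    Uses the circular mean for mu and the standard piecewise
    approximation to the inverse of A(kappa) = I1(kappa)/I0(kappa)."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)        # circular mean direction
    R = np.hypot(C, S)           # mean resultant length, in [0, 1]
    if R < 0.53:
        kappa = 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
    return mu, kappa

# Synthetic "branching angles" drawn from a known von Mises distribution,
# so the recovered parameters can be checked against the truth.
rng = np.random.default_rng(2)
angles = rng.vonmises(mu=0.8, kappa=4.0, size=5000)
mu_hat, kappa_hat = fit_von_mises(angles)
```

Large κ̂ indicates tightly concentrated angles; κ̂ near zero indicates angles spread almost uniformly around the circle.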
Pattern Analysis and Applications | 2015
Pedro L. López-Cruz; Concha Bielza; Pedro Larrañaga
Directional data are ubiquitous in science. These data have some special properties that rule out the use of classical statistics. Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. We extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are then evaluated over eight datasets, showing competitive performances against other naive Bayes classifiers that use Gaussian distributions or discretization to manage directional data.
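The purely directional case of such a classifier can be sketched as a naive Bayes model with one von Mises class-conditional density: fit (μ, κ) per class, then predict by maximum posterior. This is a minimal illustration, not the authors' implementation; the two-class synthetic data and the simplified two-branch κ approximation are assumptions:

```python
import numpy as np

def vm_logpdf(theta, mu, kappa):
    """Log-density of the von Mises distribution (np.i0 is the modified
    Bessel function of order 0)."""
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * np.i0(kappa))

class VonMisesNB:
    """Naive Bayes with a single von Mises angular predictor per class.
    A minimal sketch under stated assumptions."""
    def fit(self, theta, y):
        self.classes_ = np.unique(y)
        self.priors_, self.params_ = {}, {}
        for c in self.classes_:
            t = theta[y == c]
            C, S = np.cos(t).mean(), np.sin(t).mean()
            mu, R = np.arctan2(S, C), np.hypot(C, S)
            # Simplified two-branch approximation for kappa.
            kappa = 2*R + R**3 if R < 0.53 else -0.4 + 1.39*R + 0.43/(1 - R)
            self.priors_[c] = (y == c).mean()
            self.params_[c] = (mu, kappa)
        return self

    def predict(self, theta):
        scores = np.column_stack([
            np.log(self.priors_[c]) + vm_logpdf(theta, *self.params_[c])
            for c in self.classes_])
        return self.classes_[scores.argmax(axis=1)]

# Two classes concentrated around opposite directions.
rng = np.random.default_rng(3)
theta = np.concatenate([rng.vonmises(0.0, 4.0, 500),
                        rng.vonmises(np.pi, 4.0, 500)])
y = np.array([0]*500 + [1]*500)
clf = VonMisesNB().fit(theta, y)
acc = (clf.predict(theta) == y).mean()
```

With equal priors and equal κ, the decision function reduces to comparing cos(θ − μ₀) against cos(θ − μ₁), which is why the decision surfaces studied in the paper take simple trigonometric forms.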
International Journal of Intelligent Systems | 2015
Gherardo Varando; Pedro L. López-Cruz; Thomas Dyhre Nielsen; Pedro Larrañaga; Concha Bielza
Mixtures of polynomials (MoPs) are a nonparametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multidimensional (marginal) MoPs from data have recently been proposed. In this paper, we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
CAEPIA'11: Proceedings of the 14th International Conference on Advances in Artificial Intelligence, Spanish Association for Artificial Intelligence | 2011
Pedro L. López-Cruz; Concha Bielza; Pedro Larrañaga
Directional and angular information are to be found in almost every field of science. Directional statistics provides the theoretical background and the techniques for processing such data, which cannot be properly managed by classical statistics. The von Mises distribution is the best known angular distribution. We extend the naive Bayes classifier to the case where directional predictive variables are modeled using von Mises distributions. We find the decision surfaces induced by the classifiers and illustrate their behavior with artificial examples. Two applications to real data are included to show the potential uses of these models. Comparisons with classical techniques yield promising results.
Conference of the Spanish Association for Artificial Intelligence | 2013
Pedro L. López-Cruz; Concha Bielza; Pedro Larrañaga
We study the problem of learning Bayesian classifiers (BC) when the true class label of the training instances is not known, and is substituted by a probability distribution over the class labels for each instance. This scenario can arise, e.g., when a group of experts is asked to individually provide a class label for each instance. We particularize the generalized expectation maximization (GEM) algorithm in [1] to learn BCs with different structural complexities: naive Bayes, averaged one-dependence estimators or general conditional linear Gaussian classifiers. An evaluation conducted on eight datasets shows that BCs learned with GEM perform better than those using either the classical Expectation Maximization algorithm or potentially wrong class labels. BCs achieve similar results to the multivariate Gaussian classifier without having to estimate the full covariance matrices.
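The setting (each training instance carries a distribution over class labels rather than a hard label) can be sketched with EM for a Gaussian naive Bayes model, where the given label distributions initialise the responsibilities and the M-step uses them as weights. This is a generic sketch of soft-label EM, not the exact GEM particularization from the paper, and the data and noise level are illustrative assumptions:

```python
import numpy as np

def soft_label_gnb(X, L, n_iter=20):
    """EM for Gaussian naive Bayes with soft labels: L[i] is a probability
    distribution over classes for instance i.  A generic sketch, not the
    paper's exact GEM variant."""
    n, _ = X.shape
    k = L.shape[1]
    R = L.copy()                                  # soft labels -> initial responsibilities
    for _ in range(n_iter):
        # M-step: responsibility-weighted priors, means and variances.
        Nk = R.sum(axis=0)
        pi = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        var = (R.T @ X**2) / Nk[:, None] - mu**2 + 1e-6
        # E-step: class posteriors under the current model (naive Bayes:
        # dimensions are conditionally independent given the class).
        logp = np.log(pi)[None, :] + np.stack([
            -0.5 * (np.log(2*np.pi*var[c]) + (X - mu[c])**2 / var[c]).sum(axis=1)
            for c in range(k)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
    return pi, mu, var, R

# Two well-separated Gaussian clusters; "experts" put 0.7 mass on the
# true class and 0.3 on the wrong one.
rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0]*200 + [1]*200)
L = np.full((400, 2), 0.3)
L[np.arange(400), y] = 0.7
pi, mu, var, R = soft_label_gnb(X, L)
acc = (R.argmax(axis=1) == y).mean()
```

Initialising the responsibilities from the expert-provided distributions, rather than from hard labels, is what lets the procedure exploit the label uncertainty instead of discarding it.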
15th Conference of the Spanish Association for Artificial Intelligence | 2013
Pedro L. López-Cruz; Thomas Dyhre Nielsen; Concha Bielza; Pedro Larrañaga
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique for hybrid Bayesian networks with continuous and discrete variables. We propose two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate the methods using data sampled from a simple Gaussian Bayesian network. We study and compare the performance of these methods with the approach for learning mixtures of truncated basis functions from data.
Nature Reviews Neuroscience | 2013
Javier DeFelipe; Pedro L. López-Cruz; Ruth Benavides-Piccione; Concha Bielza; Pedro Larrañaga; Stewart A. Anderson; Andreas Burkhalter; Bruno Cauli; Alfonso Fairén; Dirk Feldmeyer; Gord Fishell; David Fitzpatrick; Tamás F. Freund; Guillermo Gonzalez-Burgos; Shaul Hestrin; Sean L. Hill; Patrick R. Hof; Josh Z. Huang; Edward G. Jones; Yasuo Kawaguchi; Zoltán F. Kisvárday; Yoshiyuki Kubota; David A. Lewis; Oscar Marín; Henry Markram; Chris J. McBain; Hanno S. Meyer; Hannah Monyer; Sacha B. Nelson; Kathleen S. Rockland