Manuel Gómez-Olmedo
University of Granada
Publication
Featured research published by Manuel Gómez-Olmedo.
Journal of Intelligent Systems | 2015
Dag Sonntag; Jose M. Peña; Manuel Gómez-Olmedo
We apply Markov chain Monte Carlo (MCMC) sampling to approximately calculate some quantities, and discuss their implications for learning directed acyclic graphs (DAGs) from data. Specifically, we calculate the approximate ratio of essential graphs (EGs) to DAGs for up to 31 nodes. Our ratios suggest that the average Markov equivalence class is small, and that a large majority of the classes have a size close to the average size. This suggests that one should not expect more than a moderate gain in efficiency when searching the space of EGs instead of the space of DAGs. We also calculate the approximate ratio of connected EGs to connected DAGs, of connected EGs to EGs, and of connected DAGs to DAGs. These new ratios are interesting because, as we will see, the DAG or EG learnt from some given data is likely to be connected. Furthermore, we prove that the latter ratio is asymptotically 1. Finally, we calculate the approximate ratio of EGs to largest chain graphs for up to 25 nodes. Our ratios suggest that Lauritzen–Wermuth–Frydenberg chain graphs are considerably more expressive than DAGs. We also report similar approximate ratios and conclusions for multivariate regression chain graphs.
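For very small graphs the ratios discussed in this abstract can be checked exactly by brute force rather than by MCMC. The sketch below (an illustration only, not the sampling method of the paper) enumerates all DAGs on a few nodes, groups them into Markov equivalence classes via their skeletons and v-structures, and prints the EG-to-DAG ratio; all function names are hypothetical.

```python
# Brute-force illustration (not the paper's MCMC approach): enumerate all DAGs
# on n nodes, group them by (skeleton, v-structures), and report the ratio of
# equivalence classes (essential graphs) to DAGs.
from itertools import combinations, product

def is_acyclic(n, edges):
    # Kahn's algorithm: repeatedly remove nodes with no incoming edges.
    indeg = [0] * n
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        indeg[v] += 1
        adj[u].append(v)
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == n

def equivalence_key(n, edges):
    # Two DAGs are Markov equivalent iff they share skeleton and v-structures.
    skeleton = frozenset(frozenset((u, v)) for u, v in edges)
    vstructs = set()
    for v in range(n):
        parents = [u for u, w in edges if w == v]
        for a, b in combinations(parents, 2):
            if frozenset((a, b)) not in skeleton:
                vstructs.add((min(a, b), max(a, b), v))
    return (skeleton, frozenset(vstructs))

def ratio_egs_to_dags(n):
    pairs = [(u, v) for u in range(n) for v in range(n) if u != v]
    dags, classes = 0, set()
    for mask in product((0, 1), repeat=len(pairs)):
        edges = [p for p, bit in zip(pairs, mask) if bit]
        if len({frozenset(e) for e in edges}) != len(edges):
            continue  # contains both u->v and v->u, skip early
        if not is_acyclic(n, edges):
            continue
        dags += 1
        classes.add(equivalence_key(n, edges))
    return len(classes), dags

for n in range(2, 5):
    egs, dags = ratio_egs_to_dags(n)
    print(n, egs, dags, egs / dags)   # e.g. n=3: 11 classes over 25 DAGs
```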
CAEPIA'09: Proceedings of the 13th Conference of the Spanish Association for Artificial Intelligence, Current Topics in Artificial Intelligence | 2009
Andrés Cano; Manuel Gómez-Olmedo; Serafín Moral; Cora B. Pérez-Ariza
This paper proposes a new data structure for representing potentials. Recursive probability trees are a generalization of probability trees. Both structures are able to represent context-specific independencies, but the new one is also able to hold a potential in a factorized way. This new structure can represent some kinds of potentials more efficiently than probability trees, and certain factorizations can only be represented with recursive trees. The basic operations for inference in Bayesian networks can be performed directly upon recursive probability trees.
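As a rough illustration of the idea, the sketch below shows a recursive-tree-like potential in which split nodes branch on a variable, product nodes hold a factorisation, and leaves store values. The class and method names are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a recursive-probability-tree-like structure: "split" nodes
# branch on a variable, "product" nodes hold a factorisation, and leaves store
# numeric values; get_value() evaluates the potential for one configuration.

class Value:
    def __init__(self, value):
        self.value = value
    def get_value(self, config):
        return self.value

class Split:
    def __init__(self, variable, children):
        self.variable = variable          # variable to branch on
        self.children = children          # dict: variable state -> subtree
    def get_value(self, config):
        return self.children[config[self.variable]].get_value(config)

class Product:
    def __init__(self, factors):
        self.factors = factors            # list of subtrees multiplied together
    def get_value(self, config):
        result = 1.0
        for factor in self.factors:
            result *= factor.get_value(config)
        return result

# Example: P(A, B) stored as a product of P(A) and P(B | A), with a
# context-specific regularity (B is uniform when A = 'a1').
p_a = Split('A', {'a0': Value(0.3), 'a1': Value(0.7)})
p_b_given_a = Split('A', {
    'a0': Split('B', {'b0': Value(0.9), 'b1': Value(0.1)}),
    'a1': Value(0.5),                     # same value for every state of B
})
joint = Product([p_a, p_b_given_a])
print(joint.get_value({'A': 'a1', 'B': 'b0'}))   # 0.35
```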
International Journal of Approximate Reasoning | 2013
Andrés Cano; Manuel Gómez-Olmedo; Andrés R. Masegosa; Serafín Moral
The marginal likelihood of the data computed using Bayesian score metrics is at the core of score+search methods for learning Bayesian networks from data. However, common formulations of these Bayesian score metrics rely on free parameters which are hard to assess. Recent theoretical and experimental work has also shown that the commonly employed BDe score metric is strongly biased by the particular assignment of its free parameter, known as the equivalent sample size. This sensitivity means that poor choices of this parameter lead to inferred BN models whose structure and parameters do not properly represent the distribution generating the data, even for large sample sizes. In this paper we argue that the problem is that the BDe metric relies on assumptions about the distribution of the BN model parameters generating the data that are too strict and do not hold in real settings. To overcome this issue we introduce an approach that marginalizes this meta-parameter locally, aiming to embrace a wider set of assumptions about these parameters. It is shown experimentally that this approach offers robust performance, as good as that of the standard BDe metric with an optimal selection of its free parameter, and in consequence it prevents the choice of wrong settings for this widely applied Bayesian score metric.
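For reference, the sketch below computes a standard BDeu-style local score with a uniform prior (alpha_ijk = ess / (q * r)), which makes the role of the equivalent sample size explicit. It does not reproduce the locally marginalised metric proposed in the paper, and the function and variable names are hypothetical.

```python
# Log BDeu score of one node given its parents; note how the result changes
# with the equivalent sample size `ess`, the sensitivity discussed above.
from math import lgamma
from collections import Counter
from itertools import product

def bdeu_node_score(data, node, parents, states, ess=1.0):
    """data: list of dicts (variable -> state); states: variable -> state list."""
    r = len(states[node])                               # node cardinality
    parent_configs = list(product(*(states[p] for p in parents)))
    q = len(parent_configs)                             # number of parent configs
    alpha_j, alpha_jk = ess / q, ess / (q * r)
    # N_ijk: occurrences of each (parent configuration, node state) pair.
    counts = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
    score = 0.0
    for pc in parent_configs:
        n_j = sum(counts[(pc, s)] for s in states[node])
        score += lgamma(alpha_j) - lgamma(alpha_j + n_j)
        for s in states[node]:
            score += lgamma(alpha_jk + counts[(pc, s)]) - lgamma(alpha_jk)
    return score

data = [{'A': 'a0', 'B': 'b0'}, {'A': 'a0', 'B': 'b0'}, {'A': 'a1', 'B': 'b1'}]
states = {'A': ['a0', 'a1'], 'B': ['b0', 'b1']}
print(bdeu_node_score(data, 'B', ['A'], states, ess=1.0))
print(bdeu_node_score(data, 'B', ['A'], states, ess=10.0))
```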
International Journal of Approximate Reasoning | 2012
Andrés Cano; Manuel Gómez-Olmedo; Serafín Moral; Cora B. Pérez-Ariza; Antonio Salmerón
A Recursive Probability Tree (RPT) is a data structure for representing the potentials involved in Probabilistic Graphical Models (PGMs). This structure is designed to capture some types of independencies that cannot be represented with previous structures, a capability that leads to savings in memory space and computation time during inference. This paper describes a learning algorithm for building RPTs from probability distributions. The experimental analysis shows that the algorithm behaves properly: it produces RPTs encoding good approximations of the original probability distributions.
International Journal of Intelligent Systems | 2013
Andrés Cano; Manuel Gómez-Olmedo; Serafín Moral; Cora B. Pérez-Ariza; Antonio Salmerón
Recursive probability trees (RPTs) are a data structure for representing several types of potentials involved in probabilistic graphical models. The RPT structure improves the modeling capabilities of previous structures (like probability trees or conditional probability tables). These capabilities can be exploited to gain savings in memory space and/or computation time during inference. This paper describes the modeling capabilities of RPTs as well as how the basic operations required for making inference on Bayesian networks operate on them. The performance of the inference process with RPTs is examined with some experiments using the variable elimination algorithm.
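To make the inference step concrete, here is a minimal variable elimination sketch over plain table potentials (dictionaries keyed by configurations); the paper carries out the same combination and marginalisation operations directly on RPTs, which this simplification does not capture. Function names are illustrative.

```python
# Variable elimination primitives on table potentials: combine (pointwise
# product over the union of scopes) and marginalize (sum out one variable).
from itertools import product

def combine(p1, vars1, p2, vars2, states):
    out_vars = list(dict.fromkeys(vars1 + vars2))
    out = {}
    for config in product(*(states[v] for v in out_vars)):
        assign = dict(zip(out_vars, config))
        k1 = tuple(assign[v] for v in vars1)
        k2 = tuple(assign[v] for v in vars2)
        out[config] = p1[k1] * p2[k2]
    return out, out_vars

def marginalize(p, vars_, target, states):
    out_vars = [v for v in vars_ if v != target]
    out = {}
    for config, value in p.items():
        assign = dict(zip(vars_, config))
        key = tuple(assign[v] for v in out_vars)
        out[key] = out.get(key, 0.0) + value
    return out, out_vars

# P(A) and P(B | A); eliminate A to obtain P(B).
states = {'A': ['a0', 'a1'], 'B': ['b0', 'b1']}
p_a = {('a0',): 0.3, ('a1',): 0.7}
p_b_a = {('b0', 'a0'): 0.9, ('b1', 'a0'): 0.1, ('b0', 'a1'): 0.5, ('b1', 'a1'): 0.5}
joint, joint_vars = combine(p_a, ['A'], p_b_a, ['B', 'A'], states)
p_b, _ = marginalize(joint, joint_vars, 'A', states)
print(p_b)   # {('b0',): 0.62, ('b1',): 0.38}
```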
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2012
Andrés Cano; Manuel Gómez-Olmedo; Cora B. Pérez-Ariza; Antonio Salmerón
We present an efficient procedure for factorising probabilistic potentials represented as probability trees. This new procedure is able to detect some regularities that cannot be captured by existing methods. In cases where an exact decomposition is not achievable, we propose a heuristic way to carry out approximate factorisations guided by a parameter called the factorisation degree, which is fast to compute. We show how this parameter can be used to control the tradeoff between complexity and accuracy in approximate inference algorithms for Bayesian networks. Keywords: Bayesian networks; probability trees; factorisation; probabilistic inference.
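A minimal sketch of the exact case: a two-variable potential phi(X, Y) factorises as f(X)·g(Y) precisely when its rows are proportional. The code below tests this and recovers the factors; it does not implement the factorisation degree heuristic for approximate cases, and the function name is hypothetical.

```python
# Test whether phi[x][y] == f[x] * g[y] for some factors f, g, by checking
# that every row is a constant multiple of the first row.
def try_factorise(phi, tol=1e-9):
    xs = list(phi)
    ys = list(phi[xs[0]])
    g = dict(phi[xs[0]])               # use the first row as the Y factor
    f = {}
    for x in xs:
        row = phi[x]
        ratio = None
        for y in ys:
            if g[y] <= tol:
                if row[y] > tol:
                    return None        # zero in g but not in this row
                continue
            r = row[y] / g[y]
            if ratio is None:
                ratio = r
            elif abs(r - ratio) > tol * max(1.0, abs(ratio)):
                return None            # rows not proportional: not factorisable
        f[x] = ratio if ratio is not None else 0.0
    return f, g

phi = {'x0': {'y0': 0.06, 'y1': 0.14},
       'x1': {'y0': 0.24, 'y1': 0.56}}   # second row is 4 times the first
print(try_factorise(phi))               # ({'x0': 1.0, 'x1': 4.0}, first row)
```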
European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 2009
Andrés Cano; Manuel Gómez-Olmedo; Serafín Moral
The present paper introduces a new kind of representation for the potentials in a Bayesian network: binary probability trees. They make it possible to represent finer-grained context-specific independencies than those which can be encoded with probability trees. This enhanced capability leads to more efficient inference algorithms in some types of Bayesian networks. The paper explains how to build a binary tree from a given potential with a procedure similar to the one employed for probability trees. It also offers a way of pruning a binary tree when exact inference is not feasible, and provides detailed algorithms for performing the basic operations on potentials (restriction, combination and marginalization) directly with binary trees. Finally, some experiments are shown that use binary trees with the variable elimination algorithm and compare their performance with that of standard probability trees.
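The sketch below illustrates the binary-split idea: each internal node divides the states of one variable into two subsets, so a single leaf can cover a context such as "A is a1 or a2". It is an assumed, simplified reconstruction, not the authors' data structure.

```python
# A binary-tree potential: each internal node routes a configuration to its
# left or right child depending on whether the chosen variable's state lies
# in the left subset of states.

class Leaf:
    def __init__(self, value):
        self.value = value
    def get_value(self, config):
        return self.value

class BinarySplit:
    def __init__(self, variable, left_states, left, right):
        self.variable = variable
        self.left_states = set(left_states)   # states routed to the left child
        self.left = left
        self.right = right
    def get_value(self, config):
        child = self.left if config[self.variable] in self.left_states else self.right
        return child.get_value(config)

# P(B | A) where A has three states but B only depends on whether A = a0:
# the a1/a2 contexts share one leaf instead of being enumerated separately.
p_b_given_a = BinarySplit(
    'A', {'a0'},
    left=BinarySplit('B', {'b0'}, Leaf(0.9), Leaf(0.1)),
    right=Leaf(0.5),
)
print(p_b_given_a.get_value({'A': 'a2', 'B': 'b1'}))   # 0.5
```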
Probabilistic Graphical Models | 2014
Rafael Cabañas; Andrés Cano; Manuel Gómez-Olmedo; Anders L. Madsen
Influence Diagrams are an effective modelling framework for the analysis of Bayesian decision making under uncertainty. Improving the performance of their evaluation is of crucial importance, as real-world decision problems become increasingly complex. Lazy Evaluation is an algorithm for evaluating Influence Diagrams based on message passing in a strong junction tree. This paper proposes the use of Symbolic Probabilistic Inference as an alternative to Variable Elimination for computing the clique-to-clique messages in the Lazy Evaluation of Influence Diagrams.
International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems | 2014
Rafael Cabañas; Anders L. Madsen; Andrés Cano; Manuel Gómez-Olmedo
An Influence Diagram is a probabilistic graphical model used to represent and solve decision problems under uncertainty. Its evaluation requires performing a series of combinations and marginalizations with the potentials attached to the Influence Diagram. Finding an optimal order for these operations, which is an NP-hard problem, is of crucial importance for the efficiency of the evaluation. The SPI algorithm treats the evaluation as a combinatorial factorization problem. In this paper, we describe how the principles of SPI can be used to solve Influence Diagrams. We also include an evaluation of different combination selection heuristics and a comparison with the variable elimination algorithm.
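The combinatorial-factorization view can be illustrated with a small sketch that greedily picks, at each step, the pair of potentials whose combination has the smallest domain size (one of several possible selection heuristics). Potentials are reduced to their variable scopes here; decision variables, utilities and marginalisations are left out, and all names are illustrative.

```python
# Greedy pairwise combination ordering: repeatedly merge the two scopes whose
# union gives the smallest table size, and record the chosen sequence.
from itertools import combinations

def table_size(scope, cardinalities):
    size = 1
    for v in scope:
        size *= cardinalities[v]
    return size

def greedy_combination_order(scopes, cardinalities):
    scopes = [frozenset(s) for s in scopes]
    order = []
    while len(scopes) > 1:
        i, j = min(
            combinations(range(len(scopes)), 2),
            key=lambda ij: table_size(scopes[ij[0]] | scopes[ij[1]], cardinalities),
        )
        merged = scopes[i] | scopes[j]
        order.append((sorted(scopes[i]), sorted(scopes[j]), sorted(merged)))
        scopes = [s for k, s in enumerate(scopes) if k not in (i, j)] + [merged]
    return order

cardinalities = {'A': 2, 'B': 2, 'C': 3, 'D': 4}
scopes = [{'A'}, {'A', 'B'}, {'B', 'C'}, {'C', 'D'}]
for left, right, result in greedy_combination_order(scopes, cardinalities):
    print(left, '*', right, '->', result)
```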
Intelligent Systems Design and Applications | 2011
Andrés Cano; Manuel Gómez-Olmedo; Serafín Moral; Andrés R. Masegosa
Most learning algorithms for Bayesian networks try to minimize the number of structural errors (missing, added or inverted links in the learned graph with respect to the true one). In this paper we assume that the objective of the learning task is to approximate the joint probability distribution of the data. With this aim, some experiments have shown that learning with probability trees to represent the conditional probability distribution of each node given its parents provides better results than learning with probability tables. When approximating a joint distribution, structure and parameter learning cannot be seen as separate tasks, and we have to evaluate the performance of combinations of procedures for inducing both structure and parameters. We carry out an experimental evaluation of several combined strategies based on trees and tables using a greedy hill-climbing algorithm, and compare the results with a restricted search procedure (the Max-Min hill-climbing algorithm).
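As a sketch of the search scheme only (not of the tree-versus-table parameter representations compared in the paper), the code below performs greedy hill climbing over DAGs with add/remove/reverse edge moves and a BIC-style local score on a toy binary dataset; all helper names are hypothetical.

```python
# Greedy hill climbing over DAG structures with single-edge moves, scored by a
# decomposable BIC-style local score (log-likelihood minus complexity penalty).
from math import log
from itertools import product
from collections import Counter

def local_bic(data, node, parents, states):
    n = len(data)
    counts = Counter((tuple(r[p] for p in parents), r[node]) for r in data)
    parent_counts = Counter(tuple(r[p] for p in parents) for r in data)
    ll = sum(c * log(c / parent_counts[pc]) for (pc, _), c in counts.items())
    q = 1
    for p in parents:
        q *= len(states[p])
    return ll - 0.5 * log(n) * q * (len(states[node]) - 1)

def is_acyclic(nodes, edges):
    # Repeatedly strip nodes whose remaining parents have all been removed.
    parents = {v: {u for u, w in edges if w == v} for v in nodes}
    remaining, removed = set(nodes), True
    while removed:
        removed = False
        for v in list(remaining):
            if not (parents[v] & remaining):
                remaining.discard(v)
                removed = True
    return not remaining

def hill_climb(data, states):
    nodes = list(states)
    def score(es):
        return sum(local_bic(data, v, sorted(u for u, w in es if w == v), states)
                   for v in nodes)
    edges = set()
    best = score(edges)
    improved = True
    while improved:
        improved = False
        neighbours = []
        for u, v in product(nodes, repeat=2):
            if u == v:
                continue
            if (u, v) in edges:
                neighbours.append(edges - {(u, v)})                # remove edge
                neighbours.append((edges - {(u, v)}) | {(v, u)})   # reverse edge
            else:
                neighbours.append(edges | {(u, v)})                # add edge
        for cand in neighbours:
            if len({frozenset(e) for e in cand}) != len(cand):
                continue                                           # 2-cycle
            if not is_acyclic(nodes, cand):
                continue
            s = score(cand)
            if s > best + 1e-9:
                best, edges, improved = s, cand, True
    return edges, best

# Toy data where B depends (noisily) on A.
data = [{'A': a, 'B': a if i % 4 else 1 - a} for i, a in enumerate([0, 1] * 20)]
states = {'A': [0, 1], 'B': [0, 1]}
print(hill_climb(data, states))
```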
Collaboration
Dive into Manuel Gómez-Olmedo's collaboration.
Dalle Molle Institute for Artificial Intelligence Research