Guy Van den Broeck
University of California, Los Angeles
Publication
Featured research published by Guy Van den Broeck.
International Joint Conference on Artificial Intelligence | 2011
Guy Van den Broeck; Nima Taghipour; Wannes Meert; Jesse Davis; Luc De Raedt
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
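The gain from lifting can be seen on a toy query. The sketch below is illustrative only, not the paper's circuit compilation: it contrasts propositional enumeration over 2^n worlds with a lifted computation that exploits the interchangeability of n independent, identically distributed atoms.

```python
from itertools import product

def prob_exists_propositional(n, p):
    """Enumerate all 2^n worlds over Smokes(1..n) and sum the
    weights of worlds where at least one atom is true."""
    total = 0.0
    for world in product([False, True], repeat=n):
        if any(world):
            weight = 1.0
            for atom in world:
                weight *= p if atom else (1 - p)
            total += weight
    return total

def prob_exists_lifted(n, p):
    """All n atoms are indistinguishable, so only the count of true
    atoms matters, not their identities: P(at least one true)
    = 1 - P(all false) = 1 - (1-p)^n, computed in O(1)."""
    return 1 - (1 - p) ** n

assert abs(prob_exists_propositional(10, 0.3) - prob_exists_lifted(10, 0.3)) < 1e-9
```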
Theory and Practice of Logic Programming | 2015
Daan Fierens; Guy Van den Broeck; Joris Renkens; Dimitar Sht. Shterionov; Bernd Gutmann; Ingo Thon; Gerda Janssens; Luc De Raedt
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several of these tasks, such as computing the marginals given evidence and learning from (partial) interpretations, have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs Expectation Maximization and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
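As an illustration of the underlying reduction, the following sketch (with hypothetical names, not the paper's conversion pipeline) computes a marginal given evidence for a tiny two-rule program by brute-force weighted model counting over its probabilistic facts:

```python
from itertools import product

def wmc(formula, weights):
    """Brute-force weighted model count: sum, over the assignments
    satisfying `formula`, of the product of per-literal weights."""
    names = sorted(weights)
    total = 0.0
    for values in product([False, True], repeat=len(names)):
        world = dict(zip(names, values))
        if formula(world):
            w = 1.0
            for v in names:
                w *= weights[v] if world[v] else 1 - weights[v]
            total += w
    return total

# Toy program: 0.1::burglary. 0.2::earthquake.
#              alarm :- burglary.  alarm :- earthquake.
weights = {"burglary": 0.1, "earthquake": 0.2}
alarm = lambda w: w["burglary"] or w["earthquake"]

# Conditional query P(alarm | not earthquake) = WMC(alarm & ev) / WMC(ev).
evidence = lambda w: not w["earthquake"]
print(wmc(lambda w: alarm(w) and evidence(w), weights)
      / wmc(evidence, weights))   # 0.1
print(wmc(alarm, weights))        # marginal P(alarm) = 0.28
```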
Journal of Applied Logic | 2017
Angelika Kimmig; Guy Van den Broeck; Luc De Raedt
Weighted model counting (WMC) is a well-known inference task on knowledge bases, and the basis for some of the most efficient techniques for probabilistic inference in graphical models. We introduce algebraic model counting (AMC), a generalization of WMC to a semiring structure that provides a unified view on a range of tasks and existing results. We show that AMC generalizes many well-known tasks in a variety of domains such as probabilistic inference, soft constraints and network and database analysis. Furthermore, we investigate AMC from a knowledge compilation perspective and show that all AMC tasks can be evaluated using sd-DNNF circuits, which are strictly more succinct, and thus more efficient to evaluate, than direct representations of sets of models. We identify further characteristics of AMC instances that allow for evaluation on even more succinct circuits.
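The semiring parameterization is easy to picture on a brute-force counter. The sketch below is illustrative only (real AMC evaluates compiled circuits, not truth tables): one generic routine recovers #SAT, weighted model counting, and a most-likely-world query simply by swapping semirings and labeling functions.

```python
from itertools import product

def amc(formula, variables, plus, times, zero, one, label):
    """Algebraic model count: semiring sum, over the satisfying
    assignments of `formula`, of the semiring product of literal labels."""
    total = zero
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if formula(world):
            w = one
            for v in variables:
                w = times(w, label(v, world[v]))
            total = plus(total, w)
    return total

f = lambda w: w["a"] or w["b"]
vs = ["a", "b"]
p = {"a": 0.1, "b": 0.2}

# Counting semiring (N, +, *, 0, 1) with all labels 1: plain #SAT.
print(amc(f, vs, lambda x, y: x + y, lambda x, y: x * y, 0, 1,
          lambda v, val: 1))                           # 3

# Probability semiring: weighted model counting.
print(amc(f, vs, lambda x, y: x + y, lambda x, y: x * y, 0.0, 1.0,
          lambda v, val: p[v] if val else 1 - p[v]))   # 0.28

# Max-times semiring: probability of the most likely satisfying world.
print(amc(f, vs, max, lambda x, y: x * y, 0.0, 1.0,
          lambda v, val: p[v] if val else 1 - p[v]))   # 0.18
```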
Artificial Intelligence | 2016
Jonas Vlasselaer; Wannes Meert; Guy Van den Broeck; Luc De Raedt
We introduce the structural interface algorithm for exact probabilistic inference in Dynamic Bayesian Networks. It unifies state-of-the-art techniques for inference in static and dynamic networks, by combining principles of knowledge compilation with the interface algorithm. The resulting algorithm not only exploits the repeated structure in the network, but also the local structure, including determinism, parameter equality and context-specific independence. Empirically, we show that the structural interface algorithm speeds up inference in the presence of local structure, and scales to larger and more complex networks.
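For intuition, the classical interface idea that the algorithm builds on can be sketched by hand: the belief over the interface variables d-separates the past from the future, so filtering repeats one fixed two-slice computation per time step. This is a minimal hand-rolled example (a single binary hidden state, hypothetical numbers), not the paper's structural interface algorithm.

```python
# Forward filtering over the interface (here, one hidden state X_t).
T = {True: {True: 0.7, False: 0.3},   # P(X_t | X_{t-1})
     False: {True: 0.2, False: 0.8}}
E = {True: {True: 0.9, False: 0.1},   # P(O_t | X_t)
     False: {True: 0.4, False: 0.6}}

def step(belief, obs):
    """Advance the interface belief by one slice, condition on obs,
    and renormalize. Every time step reuses this same computation."""
    new = {}
    for x in (True, False):
        new[x] = E[x][obs] * sum(belief[x0] * T[x0][x] for x0 in (True, False))
    z = sum(new.values())
    return {x: new[x] / z for x in new}

belief = {True: 0.5, False: 0.5}
for obs in [True, True, False]:
    belief = step(belief, obs)
print(belief)
```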
Symposium on Principles of Database Systems | 2015
Paul Beame; Guy Van den Broeck; Eric Gribkoff; Dan Suciu
The FO Model Counting problem (FOMC) is the following: given a sentence Φ in FO and a number n, compute the number of models of Φ over a domain of size n; the weighted variant (WFOMC) generalizes the problem by associating a weight to each tuple and defining the weight of a model to be the product of the weights of its tuples. In this paper we study the complexity of symmetric WFOMC, where all tuples of a given relation have the same weight. Our motivation comes from an important application, inference in knowledge bases with soft constraints, like Markov Logic Networks, but the problem is also of independent theoretical interest. We study both the data complexity and the combined complexity of FOMC and WFOMC. For the data complexity we prove the existence of an FO^3 formula for which FOMC is #P_1-complete, and the existence of a conjunctive query for which WFOMC is #P_1-complete. We also prove that all γ-acyclic queries have polynomial-time data complexity. For the combined complexity, we prove that, for every fragment FO^k, k ≥ 2, the combined complexity of FOMC (or WFOMC) is #P-complete.
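For small domains, FOMC can be checked by brute force. The sketch below (illustrative only; the paper's results concern asymptotic complexity, not this enumeration) counts the models of ∀x∀y (R(x,y) → R(y,x)) over a domain of size n and verifies the closed form 2^(n(n+1)/2):

```python
from itertools import product

def fomc_symmetric(n):
    """Brute-force FOMC for forall x forall y: R(x,y) -> R(y,x)
    over a domain of size n, enumerating all 2^(n^2) relations."""
    dom = range(n)
    pairs = [(a, b) for a in dom for b in dom]
    count = 0
    for bits in product([False, True], repeat=len(pairs)):
        R = dict(zip(pairs, bits))
        if all(not R[(a, b)] or R[(b, a)] for a in dom for b in dom):
            count += 1
    return count

for n in range(1, 4):
    # The sentence forces R to be symmetric, so a model is determined by
    # its n*(n+1)/2 unordered pairs (loops included): 2^(n*(n+1)/2) models.
    assert fomc_symmetric(n) == 2 ** (n * (n + 1) // 2)
    print(n, fomc_symmetric(n))
```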
Machine Learning | 2016
Jan Van Haaren; Guy Van den Broeck; Wannes Meert; Jesse Davis
Markov logic networks (MLNs) are a well-known statistical relational learning formalism that combines Markov networks with first-order logic. MLNs attach weights to formulas in first-order logic. Learning MLNs from data is a challenging task as it requires searching through the huge space of possible theories. Additionally, evaluating a theory's likelihood requires learning the weights of all formulas in the theory. This in turn requires performing probabilistic inference, which, in general, is intractable in MLNs. Lifted inference speeds up probabilistic inference by exploiting symmetries in a model. We explore how to use lifted inference when learning MLNs. Specifically, we investigate generative learning, where the goal is to maximize the likelihood of the model given the data. First, we provide a generic algorithm for learning maximum-likelihood weights that works with any exact lifted inference approach; in contrast, most existing approaches optimize approximate measures such as the pseudo-likelihood. Second, we provide a concrete parameter learning algorithm based on first-order knowledge compilation. Third, we propose a structure learning algorithm that learns liftable MLNs, the first MLN structure learning algorithm that exactly optimizes the likelihood of the model. Finally, we perform an empirical evaluation on three real-world datasets. Our parameter learning algorithm learns more accurate models than several competing approximate approaches, both in terms of test-set log-likelihood and on prediction tasks. Furthermore, our tractable learner outperforms intractable models on prediction tasks, suggesting that liftable models are a powerful hypothesis space, which may be sufficient for many standard learning problems.
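Generative maximum-likelihood weight learning has a simple propositional analogue: gradient ascent where the gradient of the log-likelihood with respect to a weight is the observed minus the expected feature count. The sketch below computes the partition function by exact enumeration rather than lifted inference, and all atom and predicate names are hypothetical:

```python
from itertools import product
from math import exp

# Two ground instances of the clause smokes(X) => cancer(X),
# for constants a and b.
atoms = ["s_a", "c_a", "s_b", "c_b"]

def n_true_groundings(world):
    """Feature value: number of satisfied clause groundings."""
    return sum(1 for s, c in [("s_a", "c_a"), ("s_b", "c_b")]
               if (not world[s]) or world[c])

def worlds():
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def ll_gradient(w, data):
    """d log-likelihood / d w = observed count - expected count."""
    Z = sum(exp(w * n_true_groundings(x)) for x in worlds())
    expected = sum(n_true_groundings(x) * exp(w * n_true_groundings(x))
                   for x in worlds()) / Z
    observed = sum(n_true_groundings(x) for x in data) / len(data)
    return observed - expected

# One observed world in which exactly one grounding is satisfied.
data = [{"s_a": True, "c_a": False, "s_b": False, "c_b": False}]
w = 0.0
for _ in range(200):
    w += 0.5 * ll_gradient(w, data)
print(w)  # converges near -ln(3), about -1.0986
```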
Inductive Logic Programming | 2011
Joris Renkens; Guy Van den Broeck; Siegfried Nijssen
ProbLog is a probabilistic extension of Prolog. Given the complexity of exact inference under ProbLog's semantics, approximate inference is necessary in many machine learning applications. Current approximate inference algorithms for ProbLog, however, either require dealing with large numbers of proofs or do not guarantee a low approximation error. In this paper we introduce a new approximate inference algorithm which addresses these shortcomings. Given a user-specified parameter k, this algorithm approximates the success probability of a query based on at most k proofs and ensures that the calculated probability p satisfies (1 − 1/e)p* ≤ p ≤ p*, where p* is the highest probability that can be calculated based on any set of k proofs.
Machine Learning | 2012
Joris Renkens; Guy Van den Broeck; Siegfried Nijssen
ProbLog is a probabilistic extension of Prolog. Given the complexity of exact inference under ProbLog's semantics, approximate inference is necessary in many machine learning applications. Current approximate inference algorithms for ProbLog, however, either require dealing with large numbers of proofs or do not guarantee a low approximation error. In this paper we introduce a new approximate inference algorithm which addresses these shortcomings. Given a user-specified parameter k, this algorithm approximates the success probability of a query based on at most k proofs and ensures that the calculated probability p satisfies (1 − 1/e)p* ≤ p ≤ p*, where p* is the highest probability that can be calculated based on any set of k proofs. Furthermore, a useful feature of the set of calculated proofs is that it is diverse. Our experiments show the utility of the proposed algorithm.
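The (1 − 1/e) factor is characteristic of greedily maximizing a monotone submodular objective, here the success probability of a disjunction of proofs over independent probabilistic facts. A minimal sketch under that reading (brute-force probability computation, hypothetical fact names; not the paper's implementation):

```python
from itertools import product

def prob_of_disjunction(proofs, probs):
    """Exact probability that at least one selected proof (a conjunction
    of probabilistic facts) is satisfied, by brute-force enumeration."""
    facts = sorted(probs)
    total = 0.0
    for values in product([False, True], repeat=len(facts)):
        world = dict(zip(facts, values))
        if any(all(world[f] for f in proof) for proof in proofs):
            w = 1.0
            for f in facts:
                w *= probs[f] if world[f] else 1 - probs[f]
            total += w
    return total

def greedy_k_best(all_proofs, probs, k):
    """Greedily add the proof with the largest marginal probability gain.
    Since the objective is monotone submodular, the greedy set achieves
    at least a (1 - 1/e) fraction of the best k-proof probability."""
    chosen = []
    for _ in range(min(k, len(all_proofs))):
        gain = lambda pr: (prob_of_disjunction(chosen + [pr], probs)
                           - prob_of_disjunction(chosen, probs))
        chosen.append(max((p for p in all_proofs if p not in chosen), key=gain))
    return chosen, prob_of_disjunction(chosen, probs)

probs = {"a": 0.4, "b": 0.4, "c": 0.3, "d": 0.5}
proofs = [{"a", "b"}, {"a", "c"}, {"d"}]
print(greedy_k_best(proofs, probs, 2))  # picks {d}, then {a, b}: p = 0.58
```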
European Conference on Machine Learning | 2015
Anton Dries; Angelika Kimmig; Wannes Meert; Joris Renkens; Guy Van den Broeck; Jonas Vlasselaer; Luc De Raedt
We present ProbLog2, the state-of-the-art implementation of the probabilistic programming language ProbLog. The ProbLog language allows the user to intuitively build programs that encode not only complex interactions between large sets of heterogeneous components but also the inherent uncertainties present in real-life situations. The system provides efficient algorithms for querying such models as well as for learning their parameters from data. It is available as an online tool on the web and for download. The offline version offers both command-line access to inference and learning and a Python library for building statistical relational learning applications from the system's components.
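For reference, querying a model through the problog Python package looks roughly as follows, based on its documented interface (exact names may vary across versions):

```python
from problog.program import PrologString
from problog import get_evaluatable

# A small ProbLog model: two fair coins, query whether any lands heads.
model = PrologString("""
0.5::heads(C) :- coin(C).
coin(c1).
coin(c2).
someHeads :- heads(_).
query(someHeads).
""")

# Compile the program and evaluate all declared queries.
result = get_evaluatable().create_from(model).evaluate()
print(result)  # {someHeads: 0.75}, i.e. 1 - 0.5 * 0.5
```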
International Journal of Approximate Reasoning | 2016
Jonas Vlasselaer; Guy Van den Broeck; Angelika Kimmig; Wannes Meert; Luc De Raedt
We propose T_P-compilation, a new inference technique for probabilistic logic programs that is based on forward reasoning. T_P-compilation proceeds incrementally in that it interleaves the knowledge compilation step for weighted model counting with forward reasoning on the logic program. This leads to a novel anytime algorithm that provides hard bounds on the inferred probabilities. The main difference with existing inference techniques for probabilistic logic programs is that those perform a sequence of isolated transformations: typically, conversion of the ground program into an equivalent propositional formula, followed by compilation of this formula into a more tractable target representation for weighted model counting. An empirical evaluation shows that T_P-compilation effectively handles larger instances of complex or cyclic real-world problems than current sequential approaches, both for exact and anytime approximate inference. Furthermore, we show that T_P-compilation is conducive to inference in dynamic domains as it supports efficient updates to the compiled model.
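The forward-reasoning core can be sketched by iterating the immediate consequence operator while associating each derived atom with its set of explanations; T_P-compilation maintains these as compiled circuits rather than the explicit explanation sets used in this toy version (program, facts, and predicate names below are hypothetical):

```python
from itertools import product

# Probabilistic facts and definite rules of the form (head, body).
facts = {"e1": 0.6, "e2": 0.4}
rules = [("path_ab", ["e1"]), ("path_bc", ["e2"]),
         ("path_ac", ["path_ab", "path_bc"])]

# T_P-style forward reasoning: each atom maps to a set of explanations
# (frozensets of probabilistic facts); iterate to a fixpoint.
lambdas = {f: {frozenset([f])} for f in facts}
changed = True
while changed:
    changed = False
    for head, body in rules:
        if all(b in lambdas for b in body):
            combos = {frozenset().union(*combo)
                      for combo in product(*(lambdas[b] for b in body))}
            if combos - lambdas.get(head, set()):
                lambdas.setdefault(head, set()).update(combos)
                changed = True

def prob(explanations):
    """Weighted model count of the DNF of explanations, by brute force.
    T_P-compilation instead keeps these formulas compiled for reuse."""
    fs = sorted(facts)
    total = 0.0
    for values in product([False, True], repeat=len(fs)):
        world = dict(zip(fs, values))
        if any(all(world[f] for f in expl) for expl in explanations):
            w = 1.0
            for f in fs:
                w *= facts[f] if world[f] else 1 - facts[f]
            total += w
    return total

print(prob(lambdas["path_ac"]))  # 0.6 * 0.4 = 0.24
```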