Ingo Thon
Katholieke Universiteit Leuven
Publications
Featured research published by Ingo Thon.
Theory and Practice of Logic Programming | 2015
Daan Fierens; Guy Van den Broeck; Joris Renkens; Dimitar Sht. Shterionov; Bernd Gutmann; Ingo Thon; Gerda Janssens; Luc De Raedt
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks such as computing the marginals given evidence and learning from (partial) interpretations have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
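As a concrete illustration of the pipeline described above, the sketch below uses the publicly available ProbLog 2 Python bindings to pose a query under evidence; the system internally converts the program, query, and evidence to a weighted Boolean formula and solves a weighted model counting task. The alarm model and its probabilities are invented for illustration and are not taken from the paper.

```python
# Minimal sketch using the ProbLog 2 Python bindings (pip install problog).
# The alarm model and its probabilities are illustrative, not from the paper.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.1::burglary.
0.2::earthquake.
0.9::alarm :- burglary.
0.8::alarm :- earthquake.

evidence(alarm, true).    % condition on the observed evidence
query(burglary).          % ask for the marginal P(burglary | alarm)
""")

# Compile to a weighted formula and evaluate by weighted model counting.
result = get_evaluatable().create_from(model).evaluate()
print(result)  # prints the conditional probability of each query atom
```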
european conference on machine learning | 2011
Bernd Gutmann; Ingo Thon; Luc De Raedt
ProbLog is a recently introduced probabilistic extension of the logic programming language Prolog, in which facts can be annotated with the probability that they hold. The advantage of this probabilistic language is that it naturally expresses a generative process over interpretations using a declarative model. Interpretations are relational descriptions or possible worlds. This paper introduces a novel parameter estimation algorithm LFI-ProbLog for learning ProbLog programs from partial interpretations. The algorithm is essentially a Soft-EM algorithm. It constructs a propositional logic formula for each interpretation that is used to estimate the marginals of the probabilistic parameters. The LFI-ProbLog algorithm has been experimentally evaluated on a number of data sets; the results justify the approach and show its effectiveness.
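To make the Soft-EM idea concrete, the following toy sketch estimates the parameter of a single probabilistic fact from partial interpretations in which the fact is sometimes unobserved; the expected counts for unobserved facts come from the current parameter. This is a deliberately simplified illustration, not the LFI-ProbLog implementation, which handles full relational programs.

```python
# Toy illustration of the Soft-EM idea behind LFI-ProbLog, not the actual
# implementation: one probabilistic fact is estimated from partial
# interpretations in which the fact is sometimes unobserved (None).
def em_bernoulli(observations, p=0.5, iterations=50):
    """observations: list of True / False / None (None = unobserved)."""
    for _ in range(iterations):
        # E-step: expected truth value of each unobserved fact under current p.
        expected = [p if o is None else float(o) for o in observations]
        # M-step: re-estimate the parameter from the expected counts.
        p = sum(expected) / len(expected)
    return p

data = [True, True, False, None, None, True, False, None]
print(em_bernoulli(data))  # converges to 0.6, the fraction among observed
```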
Theory and Practice of Logic Programming | 2011
Bernd Gutmann; Ingo Thon; Angelika Kimmig; Maurice Bruynooghe; Luc De Raedt
Today, many different probabilistic programming languages exist and even more inference mechanisms for these languages. Still, most logic programming based languages use backward reasoning based on SLD resolution for inference. While these methods are typically computationally efficient, they often can handle neither infinite and/or continuous distributions nor evidence. To overcome these limitations, we introduce distributional clauses, a variation and extension of Sato's distribution semantics. We also contribute a novel approximate inference method that integrates forward reasoning with importance sampling, a well-known technique for probabilistic inference. To achieve efficiency, we integrate two logic programming techniques to direct forward sampling. Magic sets are used to focus on relevant parts of the program, while the integration of backward reasoning allows one to identify and avoid regions of the sample space that are inconsistent with the evidence.
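The sketch below shows the forward-sampling-with-importance-weights idea in its simplest likelihood-weighting form: worlds are sampled forward and weighted by the likelihood of the evidence. The two-variable model is invented for illustration, and the paper's distributional-clause machinery (magic sets, backward reasoning to prune inconsistent regions) is omitted.

```python
# Generic likelihood-weighting sketch of forward sampling with importance
# weights; the model is illustrative, not from the paper.
import random

def sample_world():
    burglary = random.random() < 0.1
    # Forward reasoning: derive consequences of the sampled facts.
    alarm_p = 0.9 if burglary else 0.05
    return burglary, alarm_p

def estimate(n=100_000):
    num = den = 0.0
    for _ in range(n):
        burglary, alarm_p = sample_world()
        w = alarm_p            # weight = likelihood of evidence alarm=true
        num += w * burglary
        den += w
    return num / den           # ≈ P(burglary | alarm = true), here ≈ 0.667

print(estimate())
```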
inductive logic programming | 2010
Luc De Raedt; Ingo Thon
Traditionally, rule learners have learned deterministic rules from deterministic data, that is, the rules have been expressed as logical statements and also the examples and their classification have been purely logical. We upgrade rule learning to a probabilistic setting, in which both the examples themselves as well as their classification can be probabilistic. The setting is incorporated in the probabilistic rule learner ProbFOIL, which combines the principles of the relational rule learner FOIL with the probabilistic Prolog, ProbLog. We also report on some experiments that demonstrate the utility of the approach.
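The following schematic sketch shows what scoring a candidate rule against probabilistic examples might look like. The names, the min/max treatment of probabilistic true and false positives, and the m-estimate details are assumptions made for illustration, not the paper's exact scoring function.

```python
# Schematic sketch of scoring a candidate rule on probabilistic examples,
# in the spirit of ProbFOIL; details are assumptions, not the paper's.
def m_estimate(examples, rule, m=1.0, prior=0.5):
    """examples: list of (features, target_prob); rule: features -> prob."""
    tp = sum(min(rule(x), p) for x, p in examples)        # probabilistic TP
    fp = sum(max(rule(x) - p, 0.0) for x, p in examples)  # probabilistic FP
    return (tp + m * prior) / (tp + fp + m)

# A candidate rule predicting the target via one probabilistic condition:
rule = lambda x: 0.8 if x["smokes"] else 0.0
examples = [({"smokes": True}, 0.7), ({"smokes": False}, 0.1)]
print(m_estimate(examples, rule))
```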
Machine Learning | 2011
Ingo Thon; Niels Landwehr; Luc De Raedt
One of the goals of artificial intelligence is to develop agents that learn and act in complex environments. Realistic environments typically feature a variable number of objects, relations amongst them, and non-deterministic transition behavior. While standard probabilistic sequence models provide efficient inference and learning techniques for sequential data, they typically cannot fully capture the relational complexity. On the other hand, statistical relational learning techniques are often too inefficient to cope with complex sequential data. In this paper, we introduce a simple model that occupies an intermediate position in this expressiveness/efficiency trade-off. It is based on CP-logic (Causal Probabilistic Logic), an expressive probabilistic logic for modeling causality. However, by specializing CP-logic to represent a probability distribution over sequences of relational state descriptions and employing a Markov assumption, inference and learning become more tractable and effective. Specifically, we show how to solve part of the inference and learning problems directly at the first-order level, while transforming the remaining part into the problem of computing all satisfying assignments for a Boolean formula in a binary decision diagram. We experimentally validate that the resulting technique is able to handle probabilistic relational domains with a substantial number of objects and relations.
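A minimal sketch of the underlying idea, a Markov process over relational states driven by probabilistic causal rules, is given below. States are sets of ground facts; the rule set, predicates, and probabilities are invented for illustration and the paper's BDD-based inference is not shown.

```python
# Minimal sketch of a Markov process over relational states in the spirit of
# CP-logic specialized to sequences; rules and probabilities are illustrative.
import random

# Each causal rule: (precondition fact, probability, effect in next state).
RULES = [
    ("infected(a)", 0.4, "infected(b)"),   # a may infect its neighbour b
    ("infected(b)", 0.4, "infected(c)"),
    ("infected(a)", 0.9, "infected(a)"),   # infections tend to persist
    ("infected(b)", 0.9, "infected(b)"),
]

def step(state):
    """Sample the next relational state given only the current one (Markov)."""
    nxt = set()
    for pre, p, eff in RULES:
        if pre in state and random.random() < p:
            nxt.add(eff)
    return nxt

state = {"infected(a)"}
for t in range(5):
    print(t, sorted(state))
    state = step(state)
```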
european conference on machine learning | 2008
Ingo Thon; Niels Landwehr; Luc De Raedt
Artificial intelligence aims at developing agents that learn and act in complex environments. Realistic environments typically feature a variable number of objects, relations amongst them, and non-deterministic transition behavior. Standard probabilistic sequence models provide efficient inference and learning techniques, but typically cannot fully capture the relational complexity. On the other hand, statistical relational learning techniques are often too inefficient. In this paper, we present a simple model that occupies an intermediate position in this expressiveness/efficiency trade-off. It is based on CP-logic, an expressive probabilistic logic for modeling causality. However, by specializing CP-logic to represent a probability distribution over sequences of relational state descriptions, and employing a Markov assumption, inference and learning become more tractable and effective. We show that the resulting model is able to handle probabilistic relational domains with a substantial number of objects and relations.
inductive logic programming | 2009
Ingo Thon
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which would otherwise be prohibitively slow to compute.
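To make the filtering task concrete, here is a generic bootstrap particle filter sketch. The BDD-compiled optimal proposal from the paper is replaced, as a simplifying assumption, by the plain transition model, and the toy state is propositional rather than relational.

```python
# Generic bootstrap particle filter sketch; the paper's BDD-based optimal
# proposal is replaced here by the transition model for brevity.
import random

def particle_filter(particles, transition, likelihood, observation):
    # Propagate each particle through the (stochastic) transition model.
    particles = [transition(x) for x in particles]
    # Weight particles by how well they explain the observation.
    weights = [likelihood(x, observation) for x in particles]
    total = sum(weights)
    # Resample in proportion to the weights.
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))

# Toy hidden state: a position on a line; observations are noisy positions.
transition = lambda x: x + random.choice([-1, 0, 1])
likelihood = lambda x, y: 1.0 / (1.0 + abs(x - y))
particles = [0] * 1000
for y in [1, 2, 3, 4]:
    particles = particle_filter(particles, transition, likelihood, y)
print(sum(particles) / len(particles))  # ≈ filtered position estimate
```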
european conference on symbolic and quantitative approaches to reasoning and uncertainty | 2013
Bogdan Moldovan; Ingo Thon; Jesse Davis; Luc De Raedt
Probabilistic logic programming languages are powerful formalisms that can model complex problems where it is necessary to represent both structure and uncertainty. Using exact inference methods to compute conditional probabilities in these languages is often intractable, so approximate inference techniques are necessary. This paper proposes a Markov Chain Monte Carlo algorithm for estimating conditional probabilities based on sampling from an AND/OR tree for ProbLog, a general-purpose probabilistic logic programming language. We propose a parameterizable proposal distribution that generates the next sample in the Markov chain by probabilistically traversing the AND/OR tree from its root, which holds the evidence, to the leaves. An empirical evaluation on several different applications illustrates the advantages of our algorithm.
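A schematic Metropolis-Hastings sketch for the general task, estimating a conditional probability by sampling possible worlds, is shown below. The AND/OR-tree-guided proposal of the paper is replaced, as a simplifying assumption, by naive resampling of one probabilistic fact at a time, and the model is invented for illustration.

```python
# Schematic Metropolis-Hastings sketch for estimating P(query | evidence)
# over possible worlds; proposal and model are illustrative, not the paper's.
import random

P = {"burglary": 0.1, "earthquake": 0.2}   # probabilistic facts

def weight(world):
    """Unnormalised posterior: prior of the facts times evidence indicator."""
    prior = 1.0
    for f, p in P.items():
        prior *= p if world[f] else 1.0 - p
    alarm = world["burglary"] or world["earthquake"]   # deterministic rule
    return prior if alarm else 0.0                     # evidence: alarm=true

world = {"burglary": True, "earthquake": False}        # evidence-consistent start
count = total = 0
for _ in range(100_000):
    proposal = dict(world)
    f = random.choice(list(P))                         # flip one fact at random
    proposal[f] = not proposal[f]
    if random.random() < weight(proposal) / weight(world):
        world = proposal                               # symmetric-proposal MH accept
    count += world["burglary"]
    total += 1
print(count / total)   # ≈ P(burglary | alarm = true), here ≈ 0.357
```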
uncertainty in artificial intelligence | 2011
Daan Fierens; Guy Van den Broeck; Ingo Thon; Bernd Gutmann; Luc De Raedt
Archive | 2008
Luc De Raedt; Bart Demoen; Daan Fierens; Bernd Gutmann; Gerda Janssens; Angelika Kimmig; Niels Landwehr; Theofrastos Mantadelis; Wannes Meert; Ricardo Rocha; Vítor Santos Costa; Ingo Thon; Joost Vennekens