
Publication


Featured research published by Manfred Jaeger.


Principles of Knowledge Representation and Reasoning | 1994

Probabilistic Reasoning in Terminological Logics

Manfred Jaeger

In this paper a probabilistic extension of terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts, and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics is extended to define interpretations for the resulting probabilistic language. Our main objective is to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross-entropy minimization turns out to be a technique very well suited to this end.
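
The cross-entropy minimization step can be conveyed with a small sketch (my own illustration, not the paper's formalism): given a uniform prior over the joint truth values of two concepts A and B and a conditional-probability statement P(B | A) = 0.9, the minimum cross-entropy posterior has the exponential-tilting form q(s) ∝ prior(s)·exp(λ·g(s)), where g encodes the constraint, and the multiplier λ can be found by bisection.

```python
import math

def min_cross_entropy(prior, g):
    """Minimize KL(q || prior) subject to E_q[g] = 0.
    The minimizer has the form q(s) ~ prior(s) * exp(lam * g(s));
    lam is found by bisection, since E_q[g] is increasing in lam."""
    def tilt(lam):
        w = {s: prior[s] * math.exp(lam * g[s]) for s in prior}
        z = sum(w.values())
        q = {s: w[s] / z for s in w}
        return sum(q[s] * g[s] for s in q), q
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        e, _ = tilt(mid)
        if e > 0:
            hi = mid
        else:
            lo = mid
    return tilt((lo + hi) / 2)[1]

# Uniform prior over joint truth values of concepts A and B, constrained by
# P(B | A) = 0.9, i.e. E_q[1{A and B} - 0.9 * 1{A}] = 0.
states = [(a, b) for a in (True, False) for b in (True, False)]
prior = {s: 0.25 for s in states}
c = 0.9
g = {(a, b): ((1 - c) if b else -c) if a else 0.0 for (a, b) in states}
q = min_cross_entropy(prior, g)
```

The posterior stays as close as possible (in cross entropy) to the prior while satisfying the conditional-probability statement exactly.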


International Conference on Data Mining | 2002

A theory of inductive query answering

L. de Raedt; Manfred Jaeger; Sau Dan Lee; Heikki Mannila

We introduce the Boolean inductive query evaluation problem, which is concerned with answering inductive queries that are arbitrary Boolean expressions over monotonic and anti-monotonic predicates. Second, we develop a decomposition theory for inductive query evaluation in which a Boolean query Q is reformulated into k sub-queries Q_i = Q_A ∧ Q_M, each the conjunction of a monotonic and an anti-monotonic predicate. The solution to each sub-query can be represented using a version space. We investigate how the number of version spaces k needed to answer the query can be minimized. Third, for the pattern domain of strings, we show how the version spaces can be represented using a novel data structure, called the version space tree, and computed using a variant of the well-known Apriori algorithm. Finally, we present experiments that validate the approach.
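
A toy illustration of the version-space idea (a sketch under simplified assumptions, not the paper's version space tree): take substring patterns, one predicate closed under taking substrings (minimum frequency on a positive dataset) and one closed under taking superstrings (maximum frequency on a negative dataset). The solution set of their conjunction is a version space, fully described by its most general (G) and most specific (S) borders.

```python
def freq(p, data):
    """Number of strings in `data` containing pattern p as a substring."""
    return sum(1 for s in data if p in s)

def solutions(patterns, d_pos, d_neg, minf, maxf):
    """Conjunction of a substring-closed and a superstring-closed predicate:
    the resulting solution set is a version space."""
    return {p for p in patterns
            if freq(p, d_pos) >= minf and freq(p, d_neg) <= maxf}

def borders(vs):
    """G: solutions with no proper substring in the set (most general);
    S: solutions with no proper superstring in the set (most specific)."""
    g = {p for p in vs if not any(q != p and q in p for q in vs)}
    s = {p for p in vs if not any(q != p and p in q for q in vs)}
    return g, s

d_pos = ["abcd", "xabc", "abcy"]
d_neg = ["xyz"]
patterns = {s[i:j] for s in d_pos
            for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
vs = solutions(patterns, d_pos, d_neg, minf=3, maxf=0)
g_border, s_border = borders(vs)
```

Every solution lies between some element of G and some element of S, which is why the two borders suffice to represent the whole set.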


Annals of Mathematics and Artificial Intelligence | 2001

Complex Probabilistic Modeling with Recursive Relational Bayesian Networks

Manfred Jaeger

A number of representation systems have been proposed that extend the purely propositional Bayesian network paradigm with representation tools for some types of first-order probabilistic dependencies. Examples of such systems are dynamic Bayesian networks and systems for knowledge based model construction. We can identify the representation of probabilistic relational models as a common well-defined semantic core of such systems. Recursive relational Bayesian networks (RRBNs) are a framework for the representation of probabilistic relational models. A main design goal for RRBNs is to achieve the greatest possible expressiveness with as few elementary syntactic constructs as possible. The advantage of such an approach is that a system based on a small number of elementary constructs is much more amenable to a thorough mathematical investigation of its semantic and algorithmic properties than a system based on a larger number of high-level constructs. In this paper we show that with RRBNs we have achieved our goal: first, by showing how to solve a number of non-trivial representation problems within the framework; second, by showing how to construct, from an RRBN and a specific query, a standard Bayesian network in which the answer to the query can be computed with standard inference algorithms. Here the simplicity of the underlying representation framework greatly facilitates the development of simple algorithms and correctness proofs. As a result we obtain a construction algorithm that, even for RRBNs representing models with complex first-order and statistical dependencies, generates standard Bayesian networks of size polynomial in the size of the domain given in a specific application instance.
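
The grounding step can be sketched on a toy relational model (the names, numbers, and the noisy-or combining rule below are my own illustrative assumptions, not the paper's syntax): every ground atom over a concrete domain becomes a node of a standard Bayesian network, whose size is polynomial in the domain, and queries are then answered with ordinary inference (brute-force enumeration here, for clarity).

```python
from itertools import product

# Toy relational model: P(smokes(x)) = 0.3 for every domain element, and
# asthma(x) is a noisy-or over {smokes(y) : friend(x, y)} with strength 0.4.
domain = ["a", "b", "c"]
friend = {("a", "b"), ("a", "c"), ("b", "c")}

def noisy_or(parent_values, strength=0.4):
    fail = 1.0
    for v in parent_values:
        if v:
            fail *= 1.0 - strength
    return 1.0 - fail

def ground(domain, friend):
    """Ground the relational model into a standard Bayesian network:
    node -> (parent list, function from parent values to P(node = True))."""
    net = {}
    for x in domain:
        net["smokes(%s)" % x] = ([], lambda vals: 0.3)
        parents = ["smokes(%s)" % y for y in domain if (x, y) in friend]
        net["asthma(%s)" % x] = (parents, noisy_or)
    return net

def query(net, target):
    """P(target = True) by brute-force enumeration over the ground network."""
    nodes = list(net)
    p_true = 0.0
    for bits in product([False, True], repeat=len(nodes)):
        world = dict(zip(nodes, bits))
        p = 1.0
        for n, (parents, cpt) in net.items():
            pt = cpt([world[q] for q in parents])
            p *= pt if world[n] else 1.0 - pt
        if world[target]:
            p_true += p
    return p_true

net = ground(domain, friend)
```

With two friends who each smoke with probability 0.3, P(asthma(a)) = 2·0.21·0.4 + 0.09·0.64 = 0.2256.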


International Journal of Approximate Reasoning | 2006

Compiling relational Bayesian networks for exact inference

Mark Chavira; Adnan Darwiche; Manfred Jaeger

We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report experimental results showing successful compilation and efficient inference on relational Bayesian networks whose Primula-generated propositional instances have thousands of variables, and whose jointrees have clusters with hundreds of variables.
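
The evaluate-and-differentiate idea can be sketched as follows (a minimal illustration, not the Primula/ACE implementation): the network polynomial of a small network A → B is represented as an arithmetic circuit over indicator and parameter leaves; one upward pass evaluates it under evidence, and one downward pass computes all partial derivatives, both linear in circuit size.

```python
class Circuit:
    """Arithmetic circuit stored in topological order (children before parents)."""
    def __init__(self):
        self.nodes = []  # ("leaf", name) | ("add", child_ids) | ("mul", child_ids)

    def leaf(self, name):
        self.nodes.append(("leaf", name)); return len(self.nodes) - 1
    def add(self, *kids):
        self.nodes.append(("add", kids)); return len(self.nodes) - 1
    def mul(self, *kids):
        self.nodes.append(("mul", kids)); return len(self.nodes) - 1

    def eval_diff(self, root, values):
        """One upward pass (values) plus one downward pass (derivatives)."""
        val = [0.0] * len(self.nodes)
        for i, (op, arg) in enumerate(self.nodes):
            if op == "leaf":
                val[i] = values[arg]
            elif op == "add":
                val[i] = sum(val[k] for k in arg)
            else:
                v = 1.0
                for k in arg:
                    v *= val[k]
                val[i] = v
        deriv = [0.0] * len(self.nodes)
        deriv[root] = 1.0
        for i in range(root, -1, -1):
            op, arg = self.nodes[i]
            if op == "add":
                for k in arg:
                    deriv[k] += deriv[i]
            elif op == "mul":
                for k in arg:
                    rest = 1.0
                    for j in arg:
                        if j != k:
                            rest *= val[j]
                    deriv[k] += deriv[i] * rest
        leaf_deriv = {arg: deriv[i]
                      for i, (op, arg) in enumerate(self.nodes) if op == "leaf"}
        return val[root], leaf_deriv

# Network polynomial of A -> B with P(a)=0.6, P(b|a)=0.9, P(b|~a)=0.2:
# f = la*0.6*(lb*0.9 + lnb*0.1) + lna*0.4*(lb*0.2 + lnb*0.8)
c = Circuit()
la, lna, lb, lnb = (c.leaf(n) for n in ("la", "lna", "lb", "lnb"))
f = c.add(c.mul(la, c.leaf("ta"), c.add(c.mul(lb, c.leaf("tb_a")),
                                        c.mul(lnb, c.leaf("tnb_a")))),
          c.mul(lna, c.leaf("tna"), c.add(c.mul(lb, c.leaf("tb_na")),
                                          c.mul(lnb, c.leaf("tnb_na")))))
values = {"la": 1, "lna": 1, "lb": 1, "lnb": 0, "ta": 0.6, "tna": 0.4,
          "tb_a": 0.9, "tnb_a": 0.1, "tb_na": 0.2, "tnb_na": 0.8}
p_evidence, d = c.eval_diff(f, values)  # evidence: B = true
```

Evaluating with indicators set for evidence B = true gives P(b) = 0.62, and the derivative with respect to the indicator of A gives the joint P(a, b) = 0.54 for free.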


Probabilistic Graphical Models | 2004

Probabilistic decision graphs: combining verification and AI techniques for probabilistic inference

Manfred Jaeger

We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic decision graph for a given distribution is at most as large as the smallest junction tree for the same distribution, and that in some cases it can in fact be much smaller. Behind these very promising features of probabilistic decision graphs lies the fact that they integrate into a single coherent framework a number of representational and algorithmic optimizations developed for Bayesian networks (use of hidden variables, context-specific independence, structured representation of conditional probability tables).
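
The flavour of the representation can be conveyed with a heavily simplified sketch (a BDD-style graph with branch probabilities; Jaeger's actual PDGs organise variables in a forest and are not identical to this): shared subgraphs compress the distribution, and the probability of partial evidence is computed in one memoized pass, linear in graph size.

```python
def evidence_prob(nodes, node, evidence, memo=None):
    """Probability of `evidence` in a decision-graph-like structure.
    nodes: id -> None (terminal) or (var, low_id, high_id, p_high).
    Memoization makes the pass linear in the number of graph nodes."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    spec = nodes[node]
    if spec is None:
        return 1.0
    var, low, high, p = spec
    if var in evidence:
        res = (p * evidence_prob(nodes, high, evidence, memo) if evidence[var]
               else (1 - p) * evidence_prob(nodes, low, evidence, memo))
    else:
        res = (p * evidence_prob(nodes, high, evidence, memo)
               + (1 - p) * evidence_prob(nodes, low, evidence, memo))
    memo[node] = res
    return res

# Both branches of x1 share the same x2 node -- this kind of sharing is what
# can make such graphs smaller than a junction tree for the same distribution.
nodes = {"end": None,
         "n2": ("x2", "end", "end", 0.6),
         "root": ("x1", "n2", "n2", 0.5)}
```

Here P(x2 = true) = 0.6 regardless of x1, yet the x2 node is stored only once.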


Inductive Logic Programming | 2010

Extending ProbLog with continuous distributions

Bernd Gutmann; Manfred Jaeger; Luc De Raedt

ProbLog is a recently introduced probabilistic extension of Prolog. The key contribution of this paper is that we extend ProbLog with the ability to specify continuous distributions and show how ProbLog's exact inference mechanism can be modified to cope with such distributions. The resulting inference engine combines an interval calculus with a dynamic discretization algorithm into an effective solver.
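
The interval-calculus-plus-refinement idea can be conveyed by a small sketch (my own illustration, not the paper's inference engine): to bound P(X + Y < 1) for X, Y uniform on [0, 1], cells of the unit square are classified as entirely inside the event, entirely outside, or undecided, and undecided cells are recursively split; the bounds tighten as the discretization is refined.

```python
def bounds(x0, x1, y0, y1, depth):
    """Lower/upper bounds on P(X + Y < 1) restricted to the cell
    [x0,x1] x [y0,y1], for (X, Y) uniform on the unit square."""
    area = (x1 - x0) * (y1 - y0)
    if x1 + y1 <= 1:              # cell entirely inside the event
        return area, area
    if x0 + y0 >= 1:              # cell entirely outside
        return 0.0, 0.0
    if depth == 0:                # undecided: counts only toward the upper bound
        return 0.0, area
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    lo = hi = 0.0
    for a, b in ((x0, xm), (xm, x1)):
        for c, d in ((y0, ym), (ym, y1)):
            l, h = bounds(a, b, c, d, depth - 1)
            lo += l
            hi += h
    return lo, hi

low, high = bounds(0.0, 1.0, 0.0, 1.0, depth=10)  # true value is 0.5
```

Only cells straddling the boundary are ever split, so the work concentrates exactly where the discretization needs to be fine, and the gap between the bounds halves with every extra level.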


International Conference on Machine Learning | 2007

Parameter learning for relational Bayesian networks

Manfred Jaeger

We present a method for parameter learning in relational Bayesian networks (RBNs). Our approach consists of compiling the RBN model into a computation graph for the likelihood function and then using this likelihood graph to perform the computations required by a gradient ascent likelihood optimization procedure. The method can be applied to all RBN models that contain only differentiable combining rules. This includes models with non-decomposable combining rules, as well as models with weighted combinations or nested occurrences of combining rules. Experimental results on artificial random graph data explore the feasibility of the approach for both complete and incomplete data.
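
A stripped-down sketch of likelihood-based learning with a combining rule (illustrative only; the paper handles general differentiable combining rules and incomplete data): for a noisy-or combining rule with a single strength parameter θ, the log-likelihood and its gradient are computed from data of the form (number of active parents, child value), and θ is fitted by gradient ascent.

```python
import math

def loglik(theta, data):
    """data: list of (k, child) with k = number of active parents;
    noisy-or: P(child = True | k active parents) = 1 - (1 - theta)**k."""
    ll = 0.0
    for k, child in data:
        p = 1.0 - (1.0 - theta) ** k
        ll += math.log(p if child else 1.0 - p)
    return ll

def gradient(theta, data):
    g = 0.0
    for k, child in data:
        p = 1.0 - (1.0 - theta) ** k
        dp = k * (1.0 - theta) ** (k - 1)   # d p / d theta
        g += dp / p if child else -dp / (1.0 - p)
    return g

def fit(data, theta=0.5, lr=0.01, steps=5000):
    """Gradient ascent on the log-likelihood, keeping theta in (0, 1)."""
    for _ in range(steps):
        theta = min(max(theta + lr * gradient(theta, data), 1e-6), 1 - 1e-6)
    return theta

data = ([(1, True)] * 3 + [(1, False)] * 7 +
        [(2, True)] * 5 + [(2, False)] * 5)
theta_hat = fit(data)
```

Because the noisy-or probability is differentiable in θ, the same gradient machinery works however deeply the combining rule is nested, which is the point of the likelihood-graph construction.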


Artificial Intelligence | 2000

On the complexity of inference about probabilistic relational models

Manfred Jaeger

We investigate the complexity of probabilistic inference from knowledge bases that encode probability distributions on finite-domain relational structures. Our interest lies in the complexity in terms of the size of the domain under consideration in a specific application instance. We obtain the result that, assuming NETIME ≠ ETIME, this problem is not polynomial for reasonably expressive representation systems. The main consequence of this result is that one is unlikely to find inference techniques with better worst-case behavior than the commonly employed strategy of constructing standard Bayesian networks over ground atoms (knowledge based model construction).


Quantitative Evaluation of Systems | 2011

Learning Probabilistic Automata for Model Checking

Hua Mao; Yingke Chen; Manfred Jaeger; Thomas Dyhre Nielsen; Kim Guldstrand Larsen; Brian Nielsen

Obtaining accurate system models for verification is a hard and time-consuming process, which industry sees as a hindrance to adopting otherwise powerful model-driven development techniques and tools. In this paper we pursue an alternative approach in which an accurate high-level model is automatically constructed from observations of a given black-box embedded system. We adapt algorithms for learning finite probabilistic automata from observed system behaviors. We prove that in the limit of large sample sizes the learned model will be an accurate representation of the data-generating system. In particular, in the large-sample limit, the learned model and the original system define the same probabilities for linear temporal logic (LTL) properties. Thus, we can perform PLTL model checking on the learned model to infer properties of the system. We perform experiments learning models from system observations at different levels of abstraction, and the results show that the learned models provide very good approximations for relevant properties of the original system.
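
The starting point of such learning algorithms can be sketched as follows (a frequency prefix tree with relative-frequency transition estimates; the paper's algorithm additionally merges compatible states, ALERGIA-style, which is omitted here): observed strings are folded into a tree of next-symbol counts, from which string probabilities are read off.

```python
from collections import defaultdict

STOP = "$"  # explicit termination symbol

def build_prefix_tree(samples):
    """Frequency prefix tree: counts[prefix][symbol] = how often `symbol`
    follows `prefix` in the sample (STOP marks string termination)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in samples:
        prefix = ""
        for sym in s + STOP:
            counts[prefix][sym] += 1
            prefix += sym
    return counts

def string_prob(counts, s):
    """Probability of string s under the relative-frequency estimates."""
    p, prefix = 1.0, ""
    for sym in s + STOP:
        total = sum(counts[prefix].values())
        if total == 0 or counts[prefix][sym] == 0:
            return 0.0
        p *= counts[prefix][sym] / total
        prefix += sym
    return p

counts = build_prefix_tree(["ab", "ab", "a", "b"])
```

In the large-sample limit these estimates converge to the generating distribution, which is what makes model checking on the learned automaton meaningful; state merging is what keeps the model finite and generalizing.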


Annals of Statistics | 2005

Ignorability for categorical data

Manfred Jaeger

We study the problem of ignorability in likelihood-based inference from incomplete categorical data. Two versions of the coarsened at random (CAR) assumption are distinguished, their compatibility with the parameter distinctness assumption is investigated, and several conditions for ignorability that do not require an extra parameter distinctness assumption are established. It is shown that CAR assumptions have quite different implications depending on whether the underlying complete-data model is saturated or parametric. In the latter case, CAR assumptions can become inconsistent with observed data.
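
The practical content of ignorability can be illustrated with a sketch (my own toy example, for the saturated case): when CAR holds, each coarse observation contributes just the total probability of the cells it leaves possible, the coarsening mechanism itself drops out of the likelihood, and the MLE can be found by EM.

```python
def em_coarse(observations, k, iters=500):
    """MLE of k cell probabilities from coarsened categorical data, assuming
    CAR (so the coarsening mechanism is ignored).  Each observation is the
    set of categories it leaves possible; an exact observation is a singleton."""
    theta = [1.0 / k] * k
    for _ in range(iters):
        expected = [0.0] * k
        for obs in observations:
            z = sum(theta[c] for c in obs)
            for c in obs:
                expected[c] += theta[c] / z   # E-step: split the coarse count
        theta = [e / len(observations) for e in expected]  # M-step
    return theta

# Three categories; two exact 0s, one exact 1, one exact 2, and one
# observation coarsened to the set {0, 1}.
theta_hat = em_coarse([{0}, {0}, {1}, {0, 1}, {2}], k=3)
```

The closed-form MLE for this data is (8/15, 4/15, 1/5): the coarse observation is split between cells 0 and 1 in proportion to their estimated probabilities, exactly as the E-step does.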

Collaboration


Dive into Manfred Jaeger's collaborations.

Top Co-Authors

Luc De Raedt

Katholieke Universiteit Leuven


Sau Dan Lee

University of Freiburg
