Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kristian Kersting is active.

Publications


Featured research published by Kristian Kersting.


International Conference on Machine Learning | 2007

Most likely heteroscedastic Gaussian process regression

Kristian Kersting; Christian Plagemann; Patrick Pfaff; Wolfram Burgard

This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.'s approach and model the noise variance using a second GP, in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance, but a "most likely noise" approach. The resulting model is easy to implement and can be used directly in combination with various existing extensions of standard GPs, such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.
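
A minimal sketch of the alternating "most likely noise" idea described above, assuming scikit-learn is available (this is our illustration, not the authors' implementation): one GP models the signal, a second GP smooths per-point log noise variances estimated from squared residuals, and the signal GP is refit with that input-dependent noise.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200)[:, None]
# Synthetic data whose noise level grows with the input.
y = np.sin(X).ravel() + rng.normal(scale=0.05 + 0.3 * X.ravel() / 10, size=200)

# Start with an ordinary homoscedastic GP on the observations.
gp_signal = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, y)

for _ in range(3):  # a few alternating "most likely noise" iterations
    # Empirical per-point noise variances from squared residuals.
    resid2 = (y - gp_signal.predict(X)) ** 2
    # Second GP smooths the log noise variance over the input space.
    gp_noise = GaussianProcessRegressor(RBF()).fit(X, np.log(resid2 + 1e-6))
    noise_var = np.exp(gp_noise.predict(X))
    # Refit the signal GP with input-dependent noise via per-sample alpha.
    gp_signal = GaussianProcessRegressor(RBF(), alpha=noise_var).fit(X, y)

mean, std = gp_signal.predict(X, return_std=True)
```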


Knowledge Discovery and Data Mining | 2003

Probabilistic logic learning

Luc De Raedt; Kristian Kersting

The past few years have witnessed a significant interest in probabilistic logic learning, i.e. research lying at the intersection of probabilistic reasoning, logical representations, and machine learning. A rich variety of different formalisms and learning techniques has been developed. This paper provides an introductory survey and overview of the state of the art in probabilistic logic learning through the identification of a number of important probabilistic, logical, and learning concepts.


Inductive Logic Programming | 2001

Towards Combining Inductive Logic Programming with Bayesian Networks

Kristian Kersting; Luc De Raedt

Recently, new representation languages that integrate first-order logic with Bayesian networks have been developed. Bayesian logic programs are one of these languages. In this paper, we present results on combining Inductive Logic Programming (ILP) with Bayesian networks to learn both the qualitative and the quantitative components of Bayesian logic programs. More precisely, we show how to combine the ILP setting of learning from interpretations with score-based techniques for learning Bayesian networks. Thus, the paper positively answers Koller and Pfeffer's question of whether techniques from ILP could help to learn the logical component of first-order probabilistic models.
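
As a toy illustration of the score-based component (our sketch, not the paper's system), greedy hill-climbing keeps a candidate structure only when it improves a data score; the `refinements` and `score` callables below are hypothetical placeholders for a clause-refinement operator and a Bayesian-network scoring function.

```python
# A toy sketch of score-based greedy structure search, in the spirit of
# hill-climbing structure learning for Bayesian networks. `refinements`
# and `score` are hypothetical placeholders, not the paper's API.
def greedy_structure_search(initial_clauses, refinements, score, data):
    current = initial_clauses
    best = score(current, data)
    improved = True
    while improved:
        improved = False
        for candidate in refinements(current):
            s = score(candidate, data)
            if s > best:  # keep a refinement only if it improves the score
                current, best, improved = candidate, s, True
    return current
```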


European Conference on Artificial Intelligence | 2012

Lifted probabilistic inference

Kristian Kersting

Many AI problems arising in a wide variety of fields such as machine learning, the semantic web, network communication, computer vision, and robotics can be elegantly encoded and solved using probabilistic graphical models. Often, however, we face inference problems with symmetries and redundancies that are only implicitly captured in the graph structure and, hence, not exploitable by efficient inference approaches. A prominent example is probabilistic logical models, which tackle a long-standing goal of AI, namely unifying first-order logic (capturing regularities and symmetries) and probability (capturing uncertainty). Although they often encode large, complex models using only a few rules, so that symmetries and redundancies abound, inference in them was originally still performed at the propositional representation level and did not exploit symmetries. This paper is intended to give a (not necessarily complete) overview of and invitation to the emerging field of lifted probabilistic inference: inference techniques that exploit these symmetries in graphical models in order to speed up inference, ultimately by orders of magnitude.
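
To make the lifting idea concrete, here is a toy example of ours (not from the paper): for n exchangeable Bernoulli variables, propositional inference enumerates 2^n joint states, whereas reasoning over counts of indistinguishable variables needs only n+1 cases.

```python
# A toy illustration of lifting: n exchangeable Bernoulli(p) variables.
# Propositional inference sums over 2**n joint states; lifted inference
# groups states by how many variables are true (binomial counting).
from math import comb

def prob_at_least_k_true(n, p, k):
    # n+1 count cases instead of 2**n joint assignments.
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

print(prob_at_least_k_true(100, 0.3, 40))  # enumeration is infeasible at n=100
```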


Inductive Logic Programming | 2008

Basic principles of learning Bayesian logic programs

Kristian Kersting; Luc De Raedt

Bayesian logic programs tightly integrate definite logic programs with Bayesian networks in order to incorporate the notions of objects and relations into Bayesian networks. They establish a one-to-one mapping between ground atoms and random variables, and between the immediate consequence operator and the directly-influenced-by relation. In doing so, they cleanly separate the qualitative (i.e. logical) component from the quantitative (i.e. probabilistic) one, providing a natural framework for describing general, probabilistic dependencies among sets of random variables. In this chapter, we present results on combining Inductive Logic Programming with Bayesian networks to learn both the qualitative and the quantitative components of Bayesian logic programs from data. More precisely, we show how the qualitative components can be learned by combining the inductive logic programming setting of learning from interpretations with score-based techniques for learning Bayesian networks. The estimation of the quantitative components is reduced to the corresponding problem for (dynamic) Bayesian networks.
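
A toy grounding sketch (our illustration of the one-to-one mapping, not the chapter's code): each ground instance of a rule head becomes a random variable whose parents are the corresponding ground body atoms.

```python
# Toy Bayesian-logic-program-style grounding: one random variable per
# ground atom, with parents given by the ground instances of the rule body.
# The rule and constants are made up for illustration.
people = ["ann", "bob"]
rule = ("grade(S)", ["intelligent(S)", "difficult(course)"])  # head, body

network = {}
for s in people:
    head = rule[0].replace("S", s)
    parents = [atom.replace("S", s) for atom in rule[1]]
    network[head] = parents

print(network)
# {'grade(ann)': ['intelligent(ann)', 'difficult(course)'],
#  'grade(bob)': ['intelligent(bob)', 'difficult(course)']}
```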


Machine Learning | 2012

Gradient-based boosting for statistical relational learning: The relational dependency network case

Sriraam Natarajan; Tushar Khot; Kristian Kersting; Bernd Gutmann; Jude W. Shavlik

Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and, in turn, quickly estimate a very expressive model. Our experimental results on several different datasets show that this boosting method learns RDNs efficiently compared to state-of-the-art statistical relational learning approaches.
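
A minimal propositional stand-in for the functional-gradient step (our sketch, assuming scikit-learn; the paper fits relational regression trees instead): each stage fits a regressor to the pointwise gradient of the Bernoulli log-likelihood, y - P(y=1 | x).

```python
# Functional gradient boosting, propositional stand-in for the relational case.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, stages=20, lr=1.0):
    """y is a 0/1 array; returns the list of fitted stage regressors."""
    psi = np.zeros(len(y))               # additive model psi(x)
    trees = []
    for _ in range(stages):
        prob = 1.0 / (1.0 + np.exp(-psi))
        gradient = y - prob              # pointwise functional gradient
        tree = DecisionTreeRegressor(max_depth=3).fit(X, gradient)
        trees.append(tree)
        psi += lr * tree.predict(X)      # take a step in function space
    return trees

# Usage on synthetic data; prediction sums the stage outputs.
X = np.random.default_rng(0).random((200, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(float)
trees = boost(X, y)
```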


Journal of Artificial Intelligence Research | 2006

Logical hidden Markov models

Kristian Kersting; Luc De Raedt; Tapani Raiko

Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, computing the most likely hidden state sequence, and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
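
For the second inference problem, the classical Viterbi recursion below is a propositional stand-in (our sketch; LOHMMs generalise the states from flat symbols to logical atoms).

```python
# Standard Viterbi decoding: most likely hidden state sequence of an HMM.
import numpy as np

def viterbi(obs, pi, A, B):
    n, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])  # best log-prob ending in each state
    back = np.zeros((T, n), dtype=int)          # backpointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: best path to i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden states, two observation symbols (made-up parameters).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1], pi, A, B))
```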


Inductive Logic Programming | 2008

Probabilistic inductive logic programming

Luc De Raedt; Kristian Kersting

Probabilistic inductive logic programming, also known as statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first-order and relational logic representations. A rich variety of different formalisms and learning techniques has been developed. A unifying characterization of the underlying learning settings, however, has so far been missing. In this chapter, we start from inductive logic programming and sketch how its formalisms, settings, and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be adapted to cover state-of-the-art statistical relational learning approaches.


Encyclopedia of Machine Learning | 2014

Statistical Relational Learning

Sriraam Natarajan; Kristian Kersting; Tushar Khot; Jude W. Shavlik

This chapter presents background on the SRL models on which our work is based. We start with a brief technical background on first-order logic and graphical models. In Sect. 2.2, we present an overview of SRL models, followed by details on two popular SRL models. We then present the learning challenges in these models and the approaches taken in the literature to solve them. In Sect. 2.3.3, we present functional-gradient boosting, an ensemble approach that forms the basis of our learning approaches. Finally, we present details about the evaluation metrics and datasets we used.


Functional Plant Biology | 2012

Early drought stress detection in cereals: simplex volume maximisation for hyperspectral image analysis

Christoph Römer; Mirwaes Wahabzada; Agim Ballvora; Francisco Pinto; Micol Rossini; Jan Behmann; Jens Léon; Christian Thurau; Christian Bauckhage; Kristian Kersting; Uwe Rascher; Lutz Plümer

Early water stress recognition is of great relevance in precision plant breeding and production. Hyperspectral imaging sensors can be a valuable tool for early stress detection with high spatio-temporal resolution, but they gather large, high-dimensional data cubes that pose a significant challenge to data analysis. Classical supervised learning algorithms often fail in applied plant sciences because they need labelled datasets, which are difficult to obtain. Therefore, new approaches for unsupervised learning of relevant patterns are needed. We apply a recent matrix factorisation technique, simplex volume maximisation (SiVM), to hyperspectral data for the first time. It is an unsupervised classification approach, optimised for fast computation on massive datasets. It calculates how similar each spectrum is to observed typical spectra, which provides the means to express how likely it is that a plant is suffering from stress. The method was tested for drought stress, applied to potted barley plants in a controlled rain-out shelter experiment and to agricultural corn plots subjected to a two-factorial field setup altering water and nutrient availability. Both experiments were conducted at the canopy level. SiVM was significantly better than a combination of established vegetation indices. In the corn plots, SiVM clearly separated the different treatments, even though the effects on leaf and canopy traits were subtle.
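
A rough sketch of the column-selection step (our simplification, not the paper's implementation: real SiVM greedily maximises the volume of the simplex spanned by the selected spectra, whereas this stand-in uses farthest-point sampling as a cheap approximation of picking extreme "typical" spectra).

```python
# Farthest-point-sampling stand-in for SiVM's extreme-spectrum selection.
import numpy as np

def select_extremes(X, k):
    """X: (n_pixels, n_bands) hyperspectral matrix; returns k row indices."""
    # Seed with the spectrum farthest from the mean spectrum.
    idx = [int(np.linalg.norm(X - X.mean(axis=0), axis=1).argmax())]
    for _ in range(k - 1):
        # Distance of every spectrum to its nearest already-selected spectrum.
        d = np.min([np.linalg.norm(X - X[i], axis=1) for i in idx], axis=0)
        idx.append(int(d.argmax()))  # pick the farthest remaining spectrum
    return idx

# Each pixel's similarity to the selected extremes can then be used to
# express how likely it is that the plant at that pixel is stressed.
X = np.random.default_rng(0).random((1000, 120))
print(select_extremes(X, 5))
```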

Collaboration


Dive into Kristian Kersting's collaborations.

Top Co-Authors

Sriraam Natarajan (Indiana University Bloomington)
Luc De Raedt (Katholieke Universiteit Leuven)
Tushar Khot (University of Wisconsin-Madison)
Martin Mladenov (Technical University of Dortmund)
Jude W. Shavlik (University of Wisconsin-Madison)
Fabian Hadiji (Technical University of Dortmund)