
Publication


Featured research published by Sriraam Natarajan.


Machine Learning | 2012

Gradient-based boosting for statistical relational learning: The relational dependency network case

Sriraam Natarajan; Tushar Khot; Kristian Kersting; Bernd Gutmann; Jude W. Shavlik

Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and, in turn, quickly estimate a very expressive model. Our experimental results on several data sets show that this boosting method yields efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches.
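The gradient step can be sketched propositionally. Below is a minimal sketch of the functional-gradient update, substituting one-split regression stumps over boolean features for the paper's relational regression trees; the data layout and the stump learner are illustrative stand-ins, not the authors' implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_stump(X, residuals):
    # Fit a one-split regression "tree" to the current gradients.
    # (RDN-Boost fits relational regression trees; a stump over
    # boolean features is the simplest propositional stand-in.)
    best = None
    for j in range(len(X[0])):
        left = [r for x, r in zip(X, residuals) if x[j]]
        right = [r for x, r in zip(X, residuals) if not x[j]]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        sse = sum((r - (lv if x[j] else rv)) ** 2 for x, r in zip(X, residuals))
        if best is None or sse < best[0]:
            best = (sse, j, lv, rv)
    _, j, lv, rv = best
    return lambda x: lv if x[j] else rv

def boost(X, y, n_iters=10):
    # Each iteration fits a stump to the point-wise functional gradient
    # of the log-likelihood, I(y = 1) - P(y = 1 | x).
    psi = [0.0] * len(X)
    stumps = []
    for _ in range(n_iters):
        grads = [yi - sigmoid(p) for yi, p in zip(y, psi)]
        stump = fit_stump(X, grads)
        stumps.append(stump)
        psi = [p + stump(x) for p, x in zip(psi, X)]
    return stumps

def predict(stumps, x):
    # The learned potential is the sum of all stumps.
    return sigmoid(sum(s(x) for s in stumps))
```

The additive model grows one regression tree per iteration, so highly complex features emerge without ever searching the clause space directly.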


Encyclopedia of Machine Learning | 2014

Statistical Relational Learning

Sriraam Natarajan; Kristian Kersting; Tushar Khot; Jude W. Shavlik

This chapter presents background on the SRL models on which our work is based. We start with a brief technical background on first-order logic and graphical models. In Sect. 2.2, we present an overview of SRL models, followed by details on two popular SRL models. We then present the learning challenges in these models and the approaches taken in the literature to solve them. In Sect. 2.3.3, we present functional-gradient boosting, an ensemble approach that forms the basis of our learning approaches. Finally, we present details about the evaluation metrics and datasets we used.


Machine Learning | 2008

Transfer in variable-reward hierarchical reinforcement learning

Neville Mehta; Sriraam Natarajan; Prasad Tadepalli; Alan Fern

Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
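The initialization trick can be illustrated concretely. The sketch below assumes each stored SMDP solution is summarized as a per-state vector of reward-feature values, so that for new reward weights w the dot product w · V(s) recovers the scalar value a stored policy would achieve; the states and feature vectors are invented for illustration.

```python
def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def init_value_function(stored_vfs, w):
    # For a new SMDP with reward weights w, initialize each state's value
    # from the stored vectorized value function that scores highest under w.
    # Because rewards are linear in the features, this is a valid lower
    # bound on the new task's optimal value.
    states = stored_vfs[0].keys()
    return {s: max(dot(w, V[s]) for V in stored_vfs) for s in states}
```

A new weight vector thus starts learning from the best of the stored solutions rather than from scratch, which is the source of the transfer.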


International Conference on Machine Learning | 2005

Dynamic preferences in multi-criteria reinforcement learning

Sriraam Natarajan; Prasad Tadepalli

The current framework of reinforcement learning is based on maximizing expected returns defined by scalar rewards. But in many real-world situations, tradeoffs must be made among multiple objectives. Moreover, the agent's preferences among the objectives may vary with time. In this paper, we consider the problem of learning in the presence of time-varying preferences among multiple objectives, using numeric weights to represent their importance. We propose a method that allows us to store a finite number of policies, choose an appropriate policy for any weight vector, and improve upon it. The idea is that although there are infinitely many weight vectors, they may be well covered by a small number of optimal policies. We show this empirically in two domains: a version of Buridan's ass problem and network routing.
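The policy-selection step admits a short sketch. Assuming each stored policy is summarized by its expected return per objective (the policy names and numbers below are hypothetical), choosing a policy for a new preference weight vector is an argmax over weighted returns:

```python
def best_stored_policy(policy_values, w):
    # policy_values maps a policy name to its expected return per objective.
    # For preference weights w, reuse the stored policy whose weighted
    # return is highest; learning a fresh policy is needed only when the
    # stored set does not cover w well.
    def weighted(name):
        return sum(wi * vi for wi, vi in zip(w, policy_values[name]))
    return max(policy_values, key=weighted)
```

This is why a small policy cache can cover the infinite space of weight vectors: nearby weights typically share the same optimal policy.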


International Conference on Data Mining | 2011

Learning Markov Logic Networks via Functional Gradient Boosting

Tushar Khot; Sriraam Natarajan; Kristian Kersting; Jude W. Shavlik

Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost. Learning MLNs is a hard problem and has therefore attracted much interest in the SRL community. Current methods for learning MLNs follow a two-step approach: first, perform a search through the space of possible clauses and then learn appropriate weights for these clauses. We propose to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting, where the problem of learning MLNs is turned into a series of relational function-approximation problems. We use two kinds of representations for the gradients: clause-based and tree-based. Our experimental evaluation on several benchmark data sets demonstrates that our new approach can learn MLNs as well as, or better than, state-of-the-art methods, and often in a fraction of the time.
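For readers unfamiliar with MLN semantics, a toy sketch helps: a world's unnormalized weight is the exponentiated weighted count of satisfied clause groundings, and exact marginals follow by enumerating all worlds (feasible only at toy scale; the atoms and clause below are illustrative, not from the paper).

```python
import math
from itertools import product

def world_weight(world, weighted_clauses):
    # Unnormalized weight of a truth assignment: exp of the weighted sum
    # of satisfied clause groundings.
    return math.exp(sum(w * n_true(world) for w, n_true in weighted_clauses))

def query_prob(query_atom, atoms, weighted_clauses):
    # Exact marginal by enumerating every possible world.
    num = den = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        wgt = world_weight(world, weighted_clauses)
        den += wgt
        if world[query_atom]:
            num += wgt
    return num / den
```

For example, a single soft clause smokes(anna) => cancer(anna) with weight 1.5 makes worlds violating the implication exp(1.5) times less likely than those satisfying it. Learning an MLN means finding both the clauses and these weights, which is exactly what the boosting approach does jointly.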


International Joint Conference on Artificial Intelligence | 2011

Imitation learning in relational domains: a functional-gradient boosting approach

Sriraam Natarajan; Saket Joshi; Prasad Tadepalli; Kristian Kersting; Jude W. Shavlik

Imitation learning refers to the problem of learning how to behave by observing a teacher in action. We consider imitation learning in relational domains, in which there is a varying number of objects and relations among them. In prior work, simple relational policies were learned by viewing imitation learning as supervised learning of a function from states to actions. For propositional worlds, functional gradient methods have proved beneficial: they are simpler to implement than most existing methods, more efficient, satisfy common constraints on the cost function more naturally, and better represent our prior beliefs about the form of the function. Building on recent generalizations of functional gradient boosting to relational representations, we implement a functional gradient boosting approach to imitation learning in relational domains. In particular, given a set of traces from the human teacher, our system learns a policy in the form of a set of relational regression trees that additively approximate the functional gradients. The use of multiple additive trees combined with a relational representation allows for learning more expressive policies than was previously possible. We demonstrate the usefulness of our approach in several different domains.


Inductive Logic Programming | 2008

Logical Hierarchical Hidden Markov Models for Modeling User Activities

Sriraam Natarajan; Hung Hai Bui; Prasad Tadepalli; Kristian Kersting; Weng-Keen Wong

Hidden Markov Models (HMMs) have been successfully used in applications such as speech recognition, activity recognition, and bioinformatics. There have been previous attempts, such as Hierarchical HMMs and Abstract HMMs, to elegantly extend HMMs to multiple levels of temporal abstraction (for example, to represent a user's activities). Similarly, previous work such as Logical HMMs extends HMMs to domains with relational structure. In this work we develop a representation that naturally combines the power of both relational and hierarchical models in the form of Logical Hierarchical Hidden Markov Models (LoHiHMMs). LoHiHMMs inherit the compactness of representation from Logical HMMs and the tractability of inference from Hierarchical HMMs. We outline two inference algorithms: one based on grounding the LoHiHMM to a propositional HMM, and the other based on particle filtering adapted for this setting. We present the results of our experiments with the model in two simulated domains.
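The first inference route, grounding to a propositional HMM, starts by enumerating the propositional state space from the logical state templates. A minimal sketch, with predicates and domain objects invented for illustration:

```python
from itertools import product

def ground_states(state_templates, domain):
    # Ground each logical state template (predicate, arity) over the
    # domain of objects, yielding the state space of the equivalent
    # flat, propositional HMM.
    states = []
    for predicate, arity in state_templates:
        for args in product(domain, repeat=arity):
            states.append(f"{predicate}({','.join(args)})")
    return states
```

The grounded state space grows as |domain|^arity per template, which is why the compact logical representation (and the particle-filtering alternative) matters.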


Principles of Knowledge Representation and Reasoning | 2014

Relational logistic regression

Seyed Mehran Kazemi; David Buchman; Kristian Kersting; Sriraam Natarajan; David Poole

Logistic regression is a commonly used representation for aggregators in Bayesian belief networks when a child has multiple parents. In this paper we consider extending logistic regression to relational models, where we want to model varying populations and interactions among parents. We first examine the representational problems caused by population variation. We show how these problems arise even in simple cases with a single parametrized parent, and propose a linear relational logistic regression, which we show can represent arbitrary linear (in population size) decision thresholds, whereas traditional logistic regression cannot. We then examine representing interactions among the parents of a child node, and representing non-linear dependence on population size. We propose a multi-parent relational logistic regression which can represent interactions among parents and arbitrary polynomial decision thresholds. Finally, we show how other well-known aggregators can be represented using this relational logistic regression.
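The count-based aggregation at the heart of relational logistic regression can be sketched directly: with weights on the counts of true and false parent instances, the decision threshold is linear in the population size (the weights below are illustrative, not from the paper).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rlr_prob(w0, w_true, w_false, parent_values):
    # Relational logistic regression over a varying parent population:
    # aggregate the instances by counting how many are true and false,
    # then apply the ordinary logistic function to the weighted counts.
    n_true = sum(parent_values)
    n_false = len(parent_values) - n_true
    return sigmoid(w0 + w_true * n_true + w_false * n_false)
```

With w0 = 0, w_true = 1, w_false = -1 this fires when a majority of the parents are true, a threshold that scales with population size; an ordinary (fixed-arity) logistic regression cannot express this family of thresholds.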


AI Magazine | 2012

Machine Learning for Personalized Medicine: Predicting Primary Myocardial Infarction from Electronic Health Records

Jeremy C. Weiss; Sriraam Natarajan; Peggy L. Peissig; Catherine A. McCarty; David C. Page

Electronic health records (EHRs) are an emerging relational domain with large potential to improve clinical outcomes. We apply two statistical relational learning (SRL) algorithms to the task of predicting primary myocardial infarction. We show that one SRL algorithm, relational functional gradient boosting, outperforms propositional learners, particularly in the medically relevant high-recall region. We observe that both SRL algorithms predict outcomes better than their propositional analogs and suggest how our methods can augment current epidemiological practices.


European Conference on Machine Learning | 2012

Lifted online training of relational models with stochastic gradient methods

Babak Ahmadi; Kristian Kersting; Sriraam Natarajan

Lifted inference approaches have rendered large, previously intractable probabilistic inference problems quickly solvable by employing symmetries to handle whole sets of indistinguishable random variables. Still, in many if not most situations, training relational models will not benefit from lifting: symmetries within models easily break, since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and allows us to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for online training of relational models. Specifically, we develop the first lifted stochastic gradient optimization method with gain vector adaptation, which processes each lifted piece one after the other. On several datasets, the resulting optimizer converges to a solution of the same quality over an order of magnitude faster, simply because, unlike batch training, it starts optimizing long before having seen the entire mega-example even once.
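The piece-wise training loop with gain-vector adaptation can be sketched as follows. This is a simplified stand-in (the specific gain rule and the quadratic pieces are invented for illustration), not the paper's lifted optimizer, which additionally exploits symmetries across pieces:

```python
def piecewise_sgd(pieces, grad_fn, theta, eta=0.05, n_epochs=30):
    # Stochastic gradient over local training "pieces" with a
    # per-parameter gain vector: the gain grows while successive
    # gradients agree in sign and shrinks when they disagree.
    gains = [1.0] * len(theta)
    prev = [0.0] * len(theta)
    for _ in range(n_epochs):
        for piece in pieces:
            g = grad_fn(theta, piece)
            for i in range(len(theta)):
                gains[i] *= 1.1 if g[i] * prev[i] > 0 else 0.9
                theta[i] -= eta * gains[i] * g[i]
                prev[i] = g[i]
    return theta
```

Because each update touches only one piece, optimization starts after the first piece is seen, long before a full pass over the mega-example, which is exactly where the online speedup comes from.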

Collaboration


Dive into Sriraam Natarajan's collaborations.

Top Co-Authors

Kristian Kersting
Technical University of Dortmund

Tushar Khot
University of Wisconsin-Madison

Jude W. Shavlik
University of Wisconsin-Madison

Gautam Kunapuli
University of Wisconsin-Madison

Prasad Tadepalli
University of Wisconsin-Madison

Phillip Odom
Indiana University Bloomington

David C. Page
University of Wisconsin-Madison

Shuo Yang
Indiana University Bloomington

Alan Fern
Oregon State University