

Publication


Featured research published by Tushar Khot.


Machine Learning | 2012

Gradient-based boosting for statistical relational learning: The relational dependency network case

Sriraam Natarajan; Tushar Khot; Kristian Kersting; Bernd Gutmann; Jude W. Shavlik

Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several data sets show that this boosting method results in efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches.
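The core loop of functional-gradient boosting is simple to state; below is a minimal propositional sketch in Python, using scikit-learn regression trees as the function approximators over a plain feature matrix (an assumption for illustration; the paper fits relational regression trees over ground atoms instead):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_conditional(X, y, n_iters=10, lr=1.0):
    """Functional-gradient boosting of P(y=1 | x).

    Represents psi(x) as a sum of regression trees; each iteration fits
    a tree to the pointwise gradient of the log-likelihood,
    I[y=1] - P(y=1 | x). Propositional stand-in for the relational
    regression trees used in the paper.
    """
    trees = []
    psi = np.zeros(len(y))  # additive model, starts at the zero function
    for _ in range(n_iters):
        prob = 1.0 / (1.0 + np.exp(-psi))         # sigmoid of current potential
        gradient = (y == 1).astype(float) - prob  # I[y=1] - P(y=1|x)
        tree = DecisionTreeRegressor(max_depth=3).fit(X, gradient)
        trees.append(tree)
        psi += lr * tree.predict(X)
    return trees

def predict_proba(trees, X, lr=1.0):
    psi = lr * sum(t.predict(X) for t in trees)
    return 1.0 / (1.0 + np.exp(-psi))
```

Because each tree fits the residual between the indicator and the current predicted probability, complex features accumulate across iterations without any explicit search over feature conjunctions.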


Encyclopedia of Machine Learning | 2014

Statistical Relational Learning

Sriraam Natarajan; Kristian Kersting; Tushar Khot; Jude W. Shavlik

This chapter presents background on the SRL models on which our work is based. We start with a brief technical background on first-order logic and graphical models. In Sect. 2.2, we present an overview of SRL models, followed by details on two popular SRL models. We then present the learning challenges in these models and the approaches taken in the literature to solve them. In Sect. 2.3.3, we present functional-gradient boosting, an ensemble approach, which forms the basis of our learning approaches. Finally, we present details about the evaluation metrics and datasets we used.


International Conference on Data Mining | 2011

Learning Markov Logic Networks via Functional Gradient Boosting

Tushar Khot; Sriraam Natarajan; Kristian Kersting; Jude W. Shavlik

Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost. Learning MLNs is a hard problem and has therefore attracted much interest in the SRL community. Current methods for learning MLNs follow a two-step approach: first, perform a search through the space of possible clauses and then learn appropriate weights for these clauses. We propose to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting, where the problem of learning MLNs is turned into a series of relational functional approximation problems. We use two kinds of representations for the gradients: clause-based and tree-based. Our experimental evaluation on several benchmark data sets demonstrates that our new approach can learn MLNs as well as or better than state-of-the-art methods, but often in a fraction of the time.
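The identity driving each boosting step is compact. In notation of our own choosing (the paper optimizes a pseudo-likelihood over groundings), the pointwise functional gradient at a ground atom y_i with Markov blanket MB(y_i) takes the standard residual form from the functional-gradient boosting literature:

```latex
\Delta(y_i) \;=\; \frac{\partial \log P\big(y_i \mid \mathrm{MB}(y_i)\big)}{\partial \psi(y_i)}
\;=\; I(y_i = 1) \;-\; P\big(y_i = 1 \mid \mathrm{MB}(y_i)\big)
```

Each iteration then fits a relational regression tree (or a set of clauses) to these residuals, which is what lets the weights and the structure be learned in the same pass.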


Empirical Methods in Natural Language Processing | 2015

Exploring Markov Logic Networks for Question Answering

Tushar Khot; Niranjan Balasubramanian; Eric Gribkoff; Ashish Sabharwal; Peter Clark; Oren Etzioni

Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. We develop a system that reasons with knowledge derived from textbooks, represented in a subset of first-order logic. Automatic extraction, while scalable, often results in knowledge that is incomplete and noisy, motivating use of reasoning mechanisms that handle uncertainty. Markov Logic Networks (MLNs) seem a natural model for expressing such knowledge, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network. Our third approach, called Praline, uses MLNs to align lexical elements as well as define and control how inference should be performed in this task. Praline demonstrates a 15% accuracy boost and a 10x reduction in runtime compared to other MLN-based methods, and comparable accuracy to word-based baseline approaches.
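For a sense of the representation involved (the predicates and the weight below are invented for illustration, not taken from the paper), an extracted science rule becomes a weighted first-order clause, where the weight reflects confidence in the noisy extraction:

```latex
1.5 \quad \mathit{isa}(x, \mathit{Animal}) \wedge \mathit{eats}(x, y) \wedge \mathit{isa}(y, \mathit{Plant}) \Rightarrow \mathit{isa}(x, \mathit{Herbivore})
```

Soft clauses like this can be violated at a cost proportional to the weight, which is what lets inference tolerate incomplete and noisy extractions.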


Meeting of the Association for Computational Linguistics | 2017

Answering Complex Questions Using Open Information Extraction

Tushar Khot; Ashish Sabharwal; Peter Clark

While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.
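The paper's actual inference is a support-graph optimization (an ILP) over tuples; as a much-simplified flavor of the data it operates on, here is a hypothetical sketch of the Open IE tuple representation with a naive lexical-overlap relevance signal (all names and example facts below are invented for illustration):

```python
from typing import NamedTuple

class OpenIETuple(NamedTuple):
    subject: str
    relation: str
    objects: tuple  # Open IE tuples may carry multiple arguments

def overlap(tup, text):
    """Fraction of tuple tokens that also appear in the text."""
    tup_tokens = set(" ".join([tup.subject, tup.relation, *tup.objects]).lower().split())
    text_tokens = set(text.lower().split())
    return len(tup_tokens & text_tokens) / len(tup_tokens)

facts = [
    OpenIETuple("a plant", "absorbs", ("water", "through its roots")),
    OpenIETuple("the sun", "provides", ("light", "to plants")),
]
question = "How does a plant take in water? (A) through its roots"
best = max(facts, key=lambda t: overlap(t, question))
print(best)  # the roots tuple scores highest
```

The support-graph model replaces this single overlap score with a joint optimization that chains multiple short facts together, which is what enables the multi-fact reasoning the abstract describes.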


CALC '09 Proceedings of the Workshop on Computational Approaches to Linguistic Creativity | 2009

How creative is your writing? A linguistic creativity measure from computer science and cognitive psychology perspectives

Xiaojin Zhu; Zhiting Xu; Tushar Khot

We demonstrate that subjective creativity in sentence-writing can in part be predicted using computable quantities studied in Computer Science and Cognitive Psychology. We introduce a task in which a writer is asked to compose a sentence given a keyword. The sentence is then assigned a subjective creativity score by human judges. We build a linear regression model which, given the keyword and the sentence, predicts the creativity score. The model employs features derived from statistical language models trained on a large corpus, psychological word norms, and WordNet.
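The modeling step itself is plain linear regression. A minimal sketch with hypothetical feature values (a language-model log-probability, a psychological word norm, a WordNet depth), since the actual feature extraction pipelines are corpus-specific:

```python
import numpy as np

# Each row: [LM log-probability, concreteness norm, WordNet depth]
# for one (keyword, sentence) pair; values are invented for illustration.
X = np.array([
    [-42.1, 3.2, 5],
    [-55.7, 2.1, 7],
    [-38.9, 4.0, 4],
    [-61.3, 1.8, 8],
])
y = np.array([2.5, 4.1, 1.9, 4.6])  # human creativity scores

# Fit weights and bias by least squares: minimize ||X1 @ w - y||^2.
X1 = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("predicted scores:", X1 @ w)
```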


International Conference on Multimedia and Expo | 2009

Some new directions in graph-based semi-supervised learning

Xiaojin Zhu; Andrew B. Goldberg; Tushar Khot

In this position paper, we first review the state-of-the-art in graph-based semi-supervised learning, and point out three limitations that are particularly relevant to multimedia analysis: (1) rich data is restricted to live on a single manifold; (2) learning must happen in batch mode; and (3) the target label is assumed smooth on the manifold. We then discuss new directions in semi-supervised learning research that can potentially overcome these limitations: (i) modeling data as a mixture of multiple manifolds that may intersect or overlap; (ii) online semi-supervised learning that learns incrementally with low computation and memory needs; and (iii) learning spectrally sparse but non-smooth labels with compressive sensing. We give concrete examples in each new direction. We hope this article will inspire new research that makes semi-supervised learning an even more valuable tool for multimedia analysis.
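For readers new to the area the review surveys, the classic harmonic-function formulation of graph-based SSL (in the style of Zhu et al.) fits in a few lines. A minimal sketch, assuming a precomputed symmetric affinity matrix W:

```python
import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    """Graph-based SSL by the harmonic solution:
    f_u = (D_uu - W_uu)^{-1} W_ul f_l, i.e. each unlabeled node's
    score is the weighted average of its neighbors' scores.

    W: (n, n) symmetric nonnegative affinity matrix.
    y_labeled: labels in {0, 1} for the labeled nodes.
    labeled_idx: indices of the labeled nodes.
    """
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    D = np.diag(W.sum(axis=1))
    L = D - W  # graph Laplacian
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    W_ul = W[np.ix_(unlabeled_idx, labeled_idx)]
    f_u = np.linalg.solve(L_uu, W_ul @ y_labeled)
    return unlabeled_idx, f_u

# Tiny chain graph: node 0 labeled 0, node 3 labeled 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
idx, f = harmonic_labels(W, np.array([0.0, 1.0]), np.array([0, 3]))
print(dict(zip(idx, f)))  # interior nodes interpolate: {1: 1/3, 2: 2/3}
```

The three limitations in the position paper all attack assumptions visible here: a single graph (one manifold), a batch linear solve, and a label vector assumed smooth on that graph.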


International Conference on Data Mining | 2014

Learning from Imbalanced Data in Relational Domains: A Soft Margin Approach

Shuo Yang; Tushar Khot; Kristian Kersting; Gautam Kunapuli; Kris K. Hauser; Sriraam Natarajan

We consider the problem of learning probabilistic models from relational data. One of the key issues with relational data is class imbalance, where the number of negative examples far outnumbers the number of positive examples. The common approach for dealing with this problem is sub-sampling of negative examples. We, on the other hand, consider a soft margin approach that explicitly trades off false positives against false negatives. We apply this approach to the recently successful formalism of relational functional gradient boosting. Specifically, we modify the objective function of the learning problem to explicitly include the trade-off between false positives and false negatives. We show empirically that this approach handles the class imbalance problem more successfully than the original framework, which weighted all the examples equally.
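One way to see the idea in a single expression (our notation; the paper's exact objective differs in its details): scale the boosting residuals asymmetrically by cost parameters alpha and beta, so that mistakes on the rare positive class are penalized differently from mistakes on the abundant negatives:

```latex
\Delta(y_i) \;=\;
\begin{cases}
\alpha \,\big(1 - P(y_i = 1 \mid \mathbf{x}_i)\big) & \text{if } y_i = 1,\\[4pt]
-\,\beta \; P(y_i = 1 \mid \mathbf{x}_i) & \text{if } y_i = 0,
\end{cases}
```

Choosing alpha larger than beta pushes the learner to tolerate some false positives in exchange for fewer false negatives, replacing the blunt instrument of sub-sampling with an explicit dial.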


Artificial Intelligence in Medicine in Europe | 2015

Extracting Adverse Drug Events from Text Using Human Advice

Phillip Odom; Vishal Bangera; Tushar Khot; David C. Page; Sriraam Natarajan

Adverse drug events (ADEs) are a major concern and point of emphasis for the medical profession, government, and society in general. Methods that extract ADEs from observational data must be evaluated, and more precisely, it is important to know what is already known in the literature. Consequently, we employ a novel relation extraction technique based on a recently developed probabilistic logic learning algorithm that exploits human advice. We demonstrate on a standard adverse drug events database that the proposed approach can successfully extract existing adverse drug events from a limited amount of training data and compares favorably with state-of-the-art probabilistic logic learning methods.


Machine Learning | 2015

Gradient-based boosting for statistical relational learning: the Markov logic network and missing data cases

Tushar Khot; Sriraam Natarajan; Kristian Kersting; Jude W. Shavlik

Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent and highly expressive SRL model is Markov Logic Networks (MLNs), but the expressivity comes at the cost of learning complexity. Most of the current methods for learning MLN structure follow a two-step approach: first they search through the space of possible clauses (i.e. structures), and then they learn weights via gradient descent for these clauses. We present a functional-gradient boosting algorithm to learn both the weights (in closed form) and the structure of the MLN simultaneously. Moreover, most of the learning approaches for SRL apply the closed-world assumption, i.e., whatever is not observed is assumed to be false in the world. We relax this assumption. We extend our algorithm for MLN structure learning to handle missing data using an EM-based approach and show that this algorithm can also be used to learn Relational Dependency Networks and relational policies. Our results in many domains demonstrate that our approach can effectively learn MLNs even in the presence of missing data.
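The EM wrapper can be stated generically (standard EM notation, not the paper's exact derivation): with observed groundings x and hidden groundings h, each iteration maximizes the expected complete-data log-likelihood,

```latex
\psi^{(t+1)} \;=\; \arg\max_{\psi}\;
\mathbb{E}_{\mathbf{h}\,\sim\,P(\mathbf{h}\mid\mathbf{x};\,\psi^{(t)})}
\Big[\log P(\mathbf{x}, \mathbf{h};\, \psi)\Big]
```

with the maximization step carried out approximately by functional-gradient boosting iterations over the observed and imputed groundings, so the open-world extension reuses the same boosting machinery as the fully observed case.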

Collaboration


Dive into Tushar Khot's collaborations.

Top Co-Authors

Sriraam Natarajan
Indiana University Bloomington

Kristian Kersting
Technical University of Dortmund

Jude W. Shavlik
University of Wisconsin-Madison

Oren Etzioni
University of Washington

Phillip Odom
Indiana University Bloomington

Gautam Kunapuli
University of Wisconsin-Madison