Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Deepak Venugopal is active.

Publication


Featured research published by Deepak Venugopal.


Empirical Methods in Natural Language Processing | 2014

Relieving the Computational Bottleneck: Joint Inference for Event Extraction with High-Dimensional Features

Deepak Venugopal; Chen Chen; Vibhav Gogate; Vincent Ng

Several state-of-the-art event extraction systems employ models based on Support Vector Machines (SVMs) in a pipeline architecture, which fails to exploit the joint dependencies that typically exist among events and arguments. While there have been attempts to overcome this limitation using Markov Logic Networks (MLNs), it remains challenging to perform joint inference in MLNs when the model encodes the many high-dimensional, sophisticated features that are essential for event extraction. In this paper, we propose a new model for event extraction that combines the power of MLNs and SVMs while mitigating their respective limitations. The key idea is to reliably learn and process high-dimensional features using SVMs; encode the output of the SVMs as low-dimensional, soft formulas in an MLN; and use the superior joint inference power of the MLN to enforce joint consistency constraints over the soft formulas. We evaluate our approach on the task of extracting biomedical events from the BioNLP 2013, 2011, and 2009 Genia shared task datasets. Our approach yields the best F1 score to date on the BioNLP’13 (53.61) and BioNLP’11 (58.07) datasets and the second-best F1 score to date on the BioNLP’09 dataset (58.16).
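The abstract does not spell out how classifier outputs become soft formulas; a standard way to turn a classifier's confidence into a soft-formula weight is the log-odds transform. A minimal sketch, with function names and the clamping constant of our own choosing (not from the paper):

```python
import math

def soft_formula_weight(p, eps=1e-6):
    """Convert a classifier's confidence p for a candidate event into a
    log-odds weight for a single soft MLN formula asserting that event."""
    p = min(max(p, eps), 1.0 - eps)  # clamp away from 0/1 to keep the log finite
    return math.log(p / (1.0 - p))

def sigmoid(w):
    """Inverse of the log-odds transform: recover a probability from a weight."""
    return 1.0 / (1.0 + math.exp(-w))

# A confident SVM prediction becomes a strongly weighted soft formula;
# an uncertain one gets a weight near zero.
w_confident = soft_formula_weight(0.95)
w_uncertain = soft_formula_weight(0.55)
```

Under this encoding, a near-certain SVM prediction yields a strongly weighted soft formula, while an uncertain one contributes a weight near zero, leaving the MLN's joint consistency constraints to decide the outcome.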


Journal of Management Information Systems | 2018

Detecting Review Manipulation on Online Platforms with Hierarchical Supervised Learning

Naveen Kumar; Deepak Venugopal; Liangfei Qiu; Subodha Kumar

Opinion spammers exploit consumer trust by posting false or deceptive reviews that may have a negative impact on both consumers and businesses. These dishonest posts are difficult to detect because of complex interactions between several user characteristics, such as review velocity, volume, and variety. We propose a novel hierarchical supervised-learning approach that increases the likelihood of detecting anomalies by analyzing several user features and then characterizing their collective behavior in a unified manner. Specifically, we model user characteristics and the interactions among them as univariate and multivariate distributions. We then stack these distributions using several supervised-learning techniques, such as logistic regression, support vector machines, and k-nearest neighbors, yielding robust meta-classifiers. We perform a detailed evaluation of these methods and then develop empirical insights. This approach is of interest to online business platforms because it can help reduce false reviews and increase consumer confidence in the credibility of their online information. Our study contributes to the literature by incorporating distributional aspects of features into machine-learning techniques, which can improve the performance of fake-reviewer detection on digital platforms.
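To give the flavor of the stacking idea, here is a deliberately simplified sketch in which each base score checks one user feature against a threshold (standing in for the fitted univariate distributions) and a weighted vote plays the role of the meta-classifier. All feature names, thresholds, and weights are illustrative, not the paper's model:

```python
def base_scores(user):
    """Score each feature against a reference; fixed thresholds stand in
    for fitted univariate distributions of normal user behavior."""
    return [
        1.0 if user["velocity"] > 10 else 0.0,  # reviews per day unusually high
        1.0 if user["volume"] > 200 else 0.0,   # lifetime review count high
        1.0 if user["variety"] < 2 else 0.0,    # reviews confined to 1 category
    ]

def meta_classify(user, weights=(0.5, 0.3, 0.2), threshold=0.5):
    """Stacked meta-classifier sketch: a weighted vote over the base scores."""
    s = sum(w * x for w, x in zip(weights, base_scores(user)))
    return "suspicious" if s >= threshold else "normal"

spammer = {"velocity": 25, "volume": 400, "variety": 1}
regular = {"velocity": 1, "volume": 40, "variety": 8}
```

In the paper's actual pipeline the base level models feature distributions and the meta level is a learned classifier (logistic regression, SVM, or k-NN) rather than a fixed vote; the sketch only shows how per-feature signals combine into one collective decision.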


International Symposium on Neural Networks | 2017

Adaptive blocked Gibbs sampling for inference in probabilistic graphical models

Mohammad Maminur Islam; Mohammad Khan Al Farabi; Deepak Venugopal

Inference is a central problem in probabilistic graphical models and is often the main sub-step in probabilistic learning procedures. Thus, accurate inference algorithms are essential both to answer queries on a learned model and to learn a robust model. Gibbs sampling is arguably one of the most popular approximate inference methods and has been widely used for probabilistic inference in several domains, including natural language processing and computer vision. Here, we develop an approach that improves the performance of blocked Gibbs sampling, an advanced variant of the Gibbs sampling algorithm. Specifically, we utilize correlations among variables in the probabilistic graphical model to develop an adaptive blocked Gibbs sampler that automatically tunes its proposal distribution based on statistics derived from previous samples. In particular, we adapt the proposal so that blocks containing highly correlated variables are sampled more often than others. This in turn improves the probability estimates given by the sampler, by selecting hard-to-sample variables more often during the sampling procedure. Further, since adaptation breaks the Markovian property of the sampler, we develop a method that guarantees convergence to the correct stationary distribution despite the sampler being non-Markovian, by diminishing the adaptation of the selection probabilities over time. We evaluate our method on several discrete probabilistic graphical models taken from UAI challenge problems spanning different domains, and show that our approach is superior in accuracy to methods that ignore correlation information in the proposal distribution of the sampler.
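The adaptive scheme can be sketched on a toy example: a three-variable pairwise model, blocks that are variable pairs, block-selection probabilities adapted toward blocks whose variables agree often (a crude stand-in for the paper's correlation statistics), and a 1/t step size so the adaptation diminishes over time. The model, blocks, statistic, and constants are all our own illustrative choices, not the paper's benchmarks:

```python
import itertools
import math
import random

random.seed(0)

# Toy pairwise model over binary variables x0, x1, x2:
# p(x) is proportional to exp(sum of w_ij * [x_i == x_j]).
W = {(0, 1): 2.0, (1, 2): 0.5, (0, 2): 0.2}  # x0 and x1 strongly correlated

def unnorm(x):
    return math.exp(sum(w for (i, j), w in W.items() if x[i] == x[j]))

# Exact marginal P(x0 = 1) by enumeration, for checking the sampler.
states = list(itertools.product([0, 1], repeat=3))
Z = sum(unnorm(s) for s in states)
exact_p0 = sum(unnorm(s) for s in states if s[0] == 1) / Z

BLOCKS = [(0, 1), (1, 2), (0, 2)]

def sample_block(x, block):
    """Jointly resample the two variables in `block` from their exact
    conditional given the rest (one blocked Gibbs step)."""
    x = list(x)
    cands = []
    for a, b in itertools.product([0, 1], repeat=2):
        x[block[0]], x[block[1]] = a, b
        cands.append(((a, b), unnorm(x)))
    r = random.random() * sum(w for _, w in cands)
    for (a, b), w in cands:
        r -= w
        if r <= 0:
            x[block[0]], x[block[1]] = a, b
            break
    return tuple(x)

def adaptive_blocked_gibbs(n_iters=30000):
    x = (0, 0, 0)
    probs = [1 / 3] * 3   # block-selection distribution (the adapted proposal)
    agree = [1.0] * 3     # running agreement count per block (correlation proxy)
    count1 = 0
    for t in range(1, n_iters + 1):
        # Select a block according to the current (adapted) probabilities.
        k = random.choices(range(3), weights=probs)[0]
        x = sample_block(x, BLOCKS[k])
        i, j = BLOCKS[k]
        agree[k] += 1.0 if x[i] == x[j] else 0.0
        # Diminishing adaptation: move probs toward the normalized agreement
        # statistics with step size 1/t, so the changes vanish over time.
        target = [c / sum(agree) for c in agree]
        probs = [(1 - 1 / t) * p + (1 / t) * q for p, q in zip(probs, target)]
        count1 += x[0]
    return count1 / n_iters

est = adaptive_blocked_gibbs()
```

Because every block always keeps positive selection probability and the adaptation step size shrinks as 1/t, the sampler's marginal estimate still converges; on this symmetric toy model the exact marginal P(x0 = 1) is 0.5, which the estimate approaches.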


International Joint Conference on Artificial Intelligence | 2017

Efficient Inference for Untied MLNs

Somdeb Sarkhel; Deepak Venugopal; Nicholas Ruozzi; Vibhav Gogate

We address the problem of scaling up local-search and sampling-based inference in Markov logic networks (MLNs) that have large shared substructures but no (or few) tied weights. Such untied MLNs are ubiquitous in practical applications. However, they have very few symmetries, and as a result lifted inference algorithms, the dominant approach for scaling up inference, perform poorly on them. The key idea in our approach is to reduce the hard, time-consuming sub-task in sampling algorithms, namely computing the sum of weights of features that satisfy a full assignment, to the problem of computing a set of partition functions of graphical models, each defined over the logical variables in a first-order formula. The importance of this reduction is that when the treewidth of all the graphical models is small, it yields an order-of-magnitude speedup. When the treewidth is large, we propose an over-symmetric approximation and experimentally demonstrate that it is both fast and accurate.
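A toy case of our own construction (not the paper's algorithm) shows the flavor of this counting reduction: for a single clause R(x) ∨ S(y) whose logical variables range independently, the number of groundings satisfied by a truth assignment factorizes into per-predicate tallies, avoiding enumeration of all |X|·|Y| groundings:

```python
import itertools

X = range(50)
Y = range(80)
R = {x: (x % 3 == 0) for x in X}  # arbitrary truth assignment to R atoms
S = {y: (y % 7 == 0) for y in Y}  # arbitrary truth assignment to S atoms

# Brute force: test every grounding of the clause R(x) v S(y).
brute = sum(1 for x, y in itertools.product(X, Y) if R[x] or S[y])

# Factored count: a grounding is unsatisfied only when both atoms are false,
# so two small per-predicate tallies suffice.
false_R = sum(1 for x in X if not R[x])
false_S = sum(1 for y in Y if not S[y])
factored = len(X) * len(Y) - false_R * false_S
```

The brute-force sum touches 4,000 groundings, while the factored version needs only two linear passes; the paper generalizes this effect by computing a partition function per formula, which remains cheap whenever the induced graphical model has small treewidth.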


Neural Information Processing Systems | 2012

On Lifting the Gibbs Sampling Algorithm

Deepak Venugopal; Vibhav Gogate


National Conference on Artificial Intelligence | 2012

Advances in lifted importance sampling

Vibhav Gogate; Abhay Kumar Jha; Deepak Venugopal


Journal of Machine Learning Research | 2014

Lifted MAP inference for Markov logic networks

Somdeb Sarkhel; Deepak Venugopal; Parag Singla; Vibhav Gogate


National Conference on Artificial Intelligence | 2014

Evidence-based clustering for scalable inference in Markov logic

Deepak Venugopal; Vibhav Gogate


National Conference on Artificial Intelligence | 2015

Just count the satisfied groundings: scalable local-search and sampling based inference in MLNs

Deepak Venugopal; Somdeb Sarkhel; Vibhav Gogate


Neural Information Processing Systems | 2014

Scaling-up Importance Sampling for Markov Logic Networks

Deepak Venugopal; Vibhav Gogate

Collaboration


Dive into Deepak Venugopal's collaborations.

Top Co-Authors

Vibhav Gogate

University of Texas at Dallas

Somdeb Sarkhel

University of Texas at Dallas

Parag Singla

Indian Institute of Technology Delhi

Vincent Ng

University of Texas at Dallas

Liangfei Qiu

College of Business Administration

Chen Chen

University of Texas at Dallas