Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Vibhav Gogate is active.

Publications


Featured research published by Vibhav Gogate.


Artificial Intelligence | 2011

SampleSearch: Importance sampling in presence of determinism

Vibhav Gogate; Rina Dechter

The paper focuses on developing effective importance sampling algorithms for mixed probabilistic and deterministic graphical models. The use of importance sampling in such graphical models is problematic because it generates many useless zero-weight samples which are rejected, yielding an inefficient sampling process. To address this rejection problem, we propose the SampleSearch scheme, which augments sampling with systematic constraint-based backtracking search. We characterize the bias introduced by the combination of search with sampling, and derive a weighting scheme which yields an unbiased estimate of the desired statistics (e.g., probability of evidence). When computing the weights exactly is too complex, we propose an approximation which has a weaker guarantee of asymptotic unbiasedness. We present results of an extensive empirical evaluation demonstrating that SampleSearch outperforms other schemes in the presence of a significant amount of determinism.
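The rejection problem and the search-based fix can be sketched on a toy CSP. The following is a hypothetical illustration (not the authors' code, and it omits the paper's unbiased weighting scheme): each variable is drawn from a proposal, and a violated constraint triggers value pruning and backtracking instead of discarding the whole sample.

```python
import random

def sample_search(variables, domains, consistent, proposal):
    """Return one constraint-consistent assignment (or None if the CSP
    has no solution), sampling values from `proposal` and backtracking
    out of dead ends instead of rejecting zero-weight samples."""
    assignment = {}
    pruned = {v: set() for v in variables}  # values ruled out at each depth
    i = 0
    while 0 <= i < len(variables):
        v = variables[i]
        choices = [x for x in domains[v] if x not in pruned[v]]
        if not choices:                     # dead end: undo previous variable
            pruned[v].clear()
            i -= 1
            if i >= 0:
                pruned[variables[i]].add(assignment.pop(variables[i]))
            continue
        weights = [proposal(v, x, assignment) for x in choices]
        assignment[v] = random.choices(choices, weights=weights)[0]
        if consistent(assignment):
            i += 1
        else:                               # zero-weight sample: prune and retry
            pruned[v].add(assignment.pop(v))
    return assignment if i == len(variables) else None

# Tiny CSP x != y over {0, 1}, with a uniform proposal.
ok = lambda a: not ("x" in a and "y" in a and a["x"] == a["y"])
s = sample_search(["x", "y"], {"x": [0, 1], "y": [0, 1]}, ok, lambda v, x, a: 1.0)
print(s)
```

Every returned sample satisfies the constraint, so no sample is wasted; the paper's contribution is the weighting that turns these samples into an unbiased estimator.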


Principles and Practice of Constraint Programming | 2006

A new algorithm for sampling CSP solutions uniformly at random

Vibhav Gogate; Rina Dechter

The paper presents a method for generating solutions of a constraint satisfaction problem (CSP) uniformly at random. Our method relies on expressing the constraint network as a uniform probability distribution over its solutions and then sampling from the distribution using state-of-the-art probabilistic sampling schemes. To speed up the rate at which random solutions are generated, we augment our sampling schemes with pruning techniques used successfully in constraint satisfaction search algorithms such as conflict-directed back-jumping and no-good learning.
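The target distribution can be made concrete on a tiny network. A brute-force sketch (hypothetical example; the paper's contribution is precisely to sample from this distribution without enumerating it):

```python
import itertools
import random

# Express a toy constraint network as the uniform distribution over its
# solutions by enumeration; feasible only for tiny networks.
domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}
constraints = [lambda a: a["x"] != a["y"],
               lambda a: a["y"] < a["z"]]

names = list(domains)
solutions = [dict(zip(names, vals))
             for vals in itertools.product(*domains.values())
             if all(c(dict(zip(names, vals))) for c in constraints)]

sample = random.choice(solutions)  # uniform over solutions by construction
print(len(solutions), sample)
```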


Journal of Artificial Intelligence Research | 2010

Join-graph propagation algorithms

Robert Mateescu; Kalev Kask; Vibhav Gogate; Rina Dechter

The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded-inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that has enabled connections with approximate algorithms from statistical physics, and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes.


European Conference on Machine Learning | 2014

Cutset networks: a simple, tractable, and scalable approach for improving the accuracy of Chow-Liu trees

Tahrima Rahman; Prasanna V. Kothalkar; Vibhav Gogate

In this paper, we present cutset networks, a new tractable probabilistic model for representing multi-dimensional discrete distributions. Cutset networks are rooted OR search trees, in which each OR node represents conditioning on a variable in the model, with tree Bayesian networks (Chow-Liu trees) at the leaves. From an inference point of view, cutset networks model the mechanics of Pearl's cutset conditioning algorithm, a popular exact inference method for probabilistic graphical models. We present efficient algorithms, which leverage and adapt the vast amount of research on decision tree induction, for learning cutset networks from data. We also present an expectation-maximization (EM) algorithm for learning mixtures of cutset networks. Our experiments on a wide variety of benchmark datasets clearly demonstrate that, compared to approaches for learning other tractable models such as thin junction trees, latent tree models, arithmetic circuits and sum-product networks, our approach is significantly more scalable and provides similar or better accuracy.
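The structure described above can be sketched in a few lines. Below is a minimal evaluator for a hand-built toy cutset network (a hypothetical example; for brevity the leaves here are fully factorized distributions rather than Chow-Liu trees):

```python
def cn_prob(node, assignment):
    """Probability of a full assignment under a toy cutset network: OR
    nodes condition on one variable; leaves multiply per-variable marginals."""
    if node["type"] == "leaf":
        p = 1.0
        for var, dist in node["marginals"].items():
            p *= dist[assignment[var]]
        return p
    v = assignment[node["var"]]               # OR node: branch on its variable
    return node["weights"][v] * cn_prob(node["children"][v], assignment)

# Root OR node conditions on a; each branch carries its own leaf over b.
net = {"type": "or", "var": "a", "weights": {0: 0.3, 1: 0.7},
       "children": {0: {"type": "leaf", "marginals": {"b": {0: 0.9, 1: 0.1}}},
                    1: {"type": "leaf", "marginals": {"b": {0: 0.2, 1: 0.8}}}}}

print(cn_prob(net, {"a": 1, "b": 1}))  # = 0.7 * 0.8
```

Evaluation follows a single root-to-leaf path, which is why inference in cutset networks is tractable.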


Empirical Methods in Natural Language Processing | 2014

Relieving the Computational Bottleneck: Joint Inference for Event Extraction with High-Dimensional Features

Deepak Venugopal; Chen Chen; Vibhav Gogate; Vincent Ng

Several state-of-the-art event extraction systems employ models based on Support Vector Machines (SVMs) in a pipeline architecture, which fails to exploit the joint dependencies that typically exist among events and arguments. While there have been attempts to overcome this limitation using Markov Logic Networks (MLNs), it remains challenging to perform joint inference in MLNs when the model encodes many high-dimensional, sophisticated features such as those essential for event extraction. In this paper, we propose a new model for event extraction that combines the power of MLNs and SVMs while sidestepping their respective limitations. The key idea is to reliably learn and process high-dimensional features using SVMs; encode the output of the SVMs as low-dimensional, soft formulas in an MLN; and use the superior joint inference power of MLNs to enforce joint consistency constraints over the soft formulas. We evaluate our approach on the task of extracting biomedical events using the BioNLP 2013, 2011 and 2009 Genia shared task datasets. Our approach yields the best F1 score to date on the BioNLP’13 (53.61) and BioNLP’11 (58.07) datasets, and the second-best F1 score to date on the BioNLP’09 dataset (58.16).


Principles and Practice of Constraint Programming | 2008

Approximate Solution Sampling (and Counting) on AND/OR Spaces

Vibhav Gogate; Rina Dechter

In this paper, we describe a new algorithm for sampling solutions from a uniform distribution over the solutions of a constraint network. Our new algorithm improves upon the Sampling/Importance Resampling (SIR) component of our previous scheme, SampleSearch-SIR, by taking advantage of the decomposition implied by the network's AND/OR search space. We also describe how our new scheme can approximately count and lower-bound the number of solutions of a constraint network. We demonstrate both theoretically and empirically that our new algorithm yields far better performance than competing approaches.


Intelligenza Artificiale | 2011

Sampling-based lower bounds for counting queries

Vibhav Gogate; Rina Dechter

It is well known that the problem of computing relative approximations of weighted counting queries, such as the probability of evidence in a Bayesian network, the partition function of a Markov network, and the number of solutions of a constraint satisfaction problem, is NP-hard. In this paper, we therefore settle for an easier problem: computing high-confidence lower bounds. We propose to use importance sampling and the Markov inequality to solve it. However, a straightforward application of the Markov inequality often yields poor lower bounds. We therefore propose several new schemes for improving its performance in practice. Empirically, we show that our new schemes are quite powerful, often yielding substantially higher (better) lower bounds than all state-of-the-art schemes.
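The core trick can be illustrated on a toy counting problem (a hypothetical sketch with made-up numbers, not the paper's improved schemes): a nonnegative unbiased estimator X of Z satisfies P(X >= alpha * Z) <= 1/alpha by Markov's inequality, so X / alpha lower-bounds Z with probability at least 1 - 1/alpha.

```python
import random

def importance_estimate(weight, draw, n):
    """Average of n importance weights: a nonnegative unbiased estimator
    of the weighted count Z."""
    return sum(weight(draw()) for _ in range(n)) / n

# Toy counting query: Z = number of solutions of x != y over {0,1}^2 (Z = 2).
# Proposal: uniform over all 4 assignments, so weight = 4 * [is a solution].
random.seed(0)
draw = lambda: (random.randint(0, 1), random.randint(0, 1))
weight = lambda a: 4.0 if a[0] != a[1] else 0.0

alpha = 2.0                                  # confidence 1 - 1/alpha = 50%
x = importance_estimate(weight, draw, 1000)
print(x, "lower-bounds Z with prob >= 0.5:", x / alpha)
```

As the abstract notes, the raw bound X / alpha is often loose; the paper's schemes are about tightening it in practice.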


Artificial Intelligence | 2012

Importance sampling-based estimation over AND/OR search spaces for graphical models

Vibhav Gogate; Rina Dechter

It is well known that the accuracy of importance sampling can be improved by reducing the variance of its sample mean, and therefore variance reduction schemes have been the subject of much research. In this paper, we introduce a family of variance reduction schemes that generalize the sample mean from the conventional OR search space to the AND/OR search space for graphical models. The new AND/OR sample means allow trading time and space with variance. At one end is the AND/OR sample tree mean, which has the same time and space complexity as the conventional OR sample tree mean but has smaller variance. At the other end is the AND/OR sample graph mean, which requires more time and space to compute but has the smallest variance. Theoretically, we show that the variance is smaller in the AND/OR space because the AND/OR sample mean is defined over a larger virtual sample size compared with the OR sample mean. Empirically, we demonstrate that the AND/OR sample mean is far closer to the true mean than the OR sample mean.


Communications of The ACM | 2016

Probabilistic theorem proving

Vibhav Gogate; Pedro M. Domingos

Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving (PTP), a generalization of both, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how PTP can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate PTP, and show that it is superior to lifted belief propagation.
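The propositional core of the reduction, weighted model counting, is easy to state in code. A minimal ground (unlifted) sketch on a toy weighted knowledge base (hypothetical numbers; PTP's contribution is the lifted, first-order version):

```python
import itertools
import math

def wmc(weights, formula):
    """Weighted model count: sum, over satisfying assignments, of the
    product of per-atom weights (weight-if-true, weight-if-false)."""
    total = 0.0
    for vals in itertools.product([True, False], repeat=len(weights)):
        a = dict(zip(weights, vals))
        if formula(a):
            total += math.prod(w[0] if a[v] else w[1] for v, w in weights.items())
    return total

# Two atoms r, s and one soft rule r => s.
weights = {"r": (2.0, 1.0), "s": (1.5, 1.0)}
rule = lambda a: (not a["r"]) or a["s"]

z = wmc(weights, rule)                                 # partition function
p_s = wmc(weights, lambda a: rule(a) and a["s"]) / z   # query probability
print(z, p_s)
```

The probability of a query formula is the ratio of two such counts, which is the sense in which inference reduces to weighted model counting.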


International Joint Conference on Artificial Intelligence | 2018

Algorithms for the Nearest Assignment Problem

Sara Rouhani; Tahrima Rahman; Vibhav Gogate

We consider the following nearest assignment problem (NAP): given a Bayesian network B and a probability value q, find a configuration ω of the variables in B such that the difference between q and the probability of ω is minimized. NAP is much harder than conventional inference problems, such as finding the most probable explanation, in that it is NP-hard even on independent Bayesian networks (IBNs), which are networks having no edges. We propose a two-way number-partitioning encoding of NAP on IBNs and then leverage polynomial-time approximation algorithms from the number partitioning literature to develop algorithms with guarantees for solving NAP. We extend our basic algorithm from IBNs to arbitrary probabilistic graphical models by leveraging cutset-based conditioning, local search and (Rao-Blackwellised) sampling algorithms. We derive approximation and complexity guarantees for our new algorithms and show experimentally that they are quite accurate in practice.
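The IBN encoding can be illustrated by brute force on a toy network (hypothetical numbers; the enumeration below is exactly what the paper's number-partitioning approximations avoid). Because the variables are independent, log P(ω) is a sum with one term chosen per variable, and NAP becomes finding the choice whose sum lands nearest log q (distance measured in log space here for convenience):

```python
import itertools
import math

# Toy independent Bayesian network: one marginal table per variable.
marginals = {"a": {0: 0.1, 1: 0.9},
             "b": {0: 0.4, 1: 0.6},
             "c": {0: 0.5, 1: 0.5}}
q = 0.25                                     # target probability value

names = list(marginals)
best = min(itertools.product(*(marginals[v] for v in names)),
           key=lambda vals: abs(math.log(q) - sum(
               math.log(marginals[v][x]) for v, x in zip(names, vals))))
omega = dict(zip(names, best))               # nearest assignment
p = math.prod(marginals[v][x] for v, x in omega.items())
print(omega, p)
```

Picking one log-term per variable so the sum hits a target is what connects NAP on IBNs to two-way number partitioning.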

Collaboration


Dive into Vibhav Gogate's collaborations.

Top Co-Authors

Rina Dechter

University of California

Deepak Venugopal

University of Texas at Dallas

Somdeb Sarkhel

University of Texas at Dallas

Parag Singla

Indian Institute of Technology Delhi

David B. Smith

University of Texas at Dallas

Tahrima Rahman

University of Texas at Dallas

Kalev Kask

University of California
