
Publication


Featured research published by Andrew E. Gelfand.


International Symposium on Information Theory | 2013

Belief Propagation for Linear Programming

Andrew E. Gelfand; Jinwoo Shin; Michael Chertkov

Belief Propagation (BP) is a popular, distributed heuristic for performing MAP computations in Graphical Models. BP can be interpreted, from a variational perspective, as minimizing the Bethe Free Energy (BFE). BP can also be used to solve a special class of Linear Programming (LP) problems. For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at “zero temperature”. We generalize these prior results and establish a tight characterization of the LP problems that can be formulated as an equivalent LP relaxation of MAP inference. Moreover, we suggest an efficient, iterative annealing BP algorithm for solving this broader class of LP problems. We demonstrate the algorithm's performance on a set of weighted matching problems by using it as a cutting plane method to solve a sequence of LPs tightened by adding “blossom” inequalities.
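The cutting-plane idea in the abstract can be illustrated on a toy instance. This sketch (not from the paper; the triangle graph, weights, and use of SciPy's LP solver are illustrative assumptions) shows the plain matching LP relaxation producing a fractional half-integral optimum, and a single “blossom” inequality for the odd vertex set {1,2,3} tightening it to the true maximum matching weight.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: a triangle with weight 2 on each edge e12, e13, e23.
w = np.array([2.0, 2.0, 2.0])
# Degree constraints: each vertex is matched at most once.
A_deg = np.array([[1, 1, 0],           # vertex 1 touches e12, e13
                  [1, 0, 1],           # vertex 2 touches e12, e23
                  [0, 1, 1]])          # vertex 3 touches e13, e23
b_deg = np.ones(3)
bounds = [(0, 1)] * 3

# Plain LP relaxation: the optimum is fractional (x = 1/2 on every edge).
loose = linprog(-w, A_ub=A_deg, b_ub=b_deg, bounds=bounds, method="highs")
print(-loose.fun)                      # 3.0, above the best matching weight

# Cutting plane: add the blossom inequality for the odd set {1,2,3},
# x12 + x13 + x23 <= 1, which every integral matching satisfies.
A_cut = np.vstack([A_deg, [1, 1, 1]])
b_cut = np.append(b_deg, 1.0)
tight = linprog(-w, A_ub=A_cut, b_ub=b_cut, bounds=bounds, method="highs")
print(-tight.fun)                      # 2.0, the true maximum matching weight
```

The paper's contribution is to solve such LPs with an annealed BP algorithm rather than a generic solver; the snippet only shows why the blossom cuts are needed.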


International Conference on Computer Vision | 2011

Integrating local classifiers through nonlinear dynamics on label graphs with an application to image segmentation

Yutian Chen; Andrew E. Gelfand; Charless C. Fowlkes; Max Welling

We present a new method to combine possibly inconsistent locally (piecewise) trained conditional models p(yα∣xα) into pseudo-samples from a global model. Our method does not require training of a CRF, but instead generates samples by iterating forward a weakly chaotic dynamical system. The new method is illustrated on image segmentation tasks where classifiers based on local appearance cues are combined with pairwise boundary cues.
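The paper's sampler itself is beyond a short snippet, but the deterministic “herding” dynamics that this line of work builds on (Welling, 2009) can be sketched in the simplest possible setting: one binary variable with one feature. The target moment 0.7 below is an arbitrary illustrative choice. The weight update is fully deterministic, yet the empirical mean of the generated pseudo-samples tracks the target at an O(1/T) rate, which is the sense in which iterating a weakly chaotic system can stand in for sampling.

```python
# Herding a single binary feature phi(s) = s toward target moment mu = 0.7.
mu = 0.7
w = mu                      # weight, initialized at the target moment
samples = []
for _ in range(1000):
    s = 1 if w > 0 else 0   # greedy state: argmax_s  w * phi(s)
    w += mu - s             # deterministic (weakly chaotic) weight update
    samples.append(s)

mean = sum(samples) / len(samples)
print(mean)                 # close to 0.7; the error is O(1/T)
```

Because the weight stays bounded, the gap between the empirical mean and mu after T steps is at most (max w - min w) / T, far faster than the O(1/sqrt(T)) of i.i.d. sampling.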


IEEE Transactions on Information Theory | 2018

Maximum Weight Matching Using Odd-Sized Cycles: Max-Product Belief Propagation and Half-Integrality

Sungsoo Ahn; Michael Chertkov; Andrew E. Gelfand; Sejun Park; Jinwoo Shin

We study the maximum weight matching (MWM) problem for general graphs through the max-product belief propagation (BP) and related Linear Programming (LP). The BP approach provides distributed heuristics for finding the maximum a posteriori (MAP) assignment in a joint probability distribution represented by a graphical model (GM), and respective LPs can be considered as continuous relaxations of the discrete MAP problem. It was recently shown that a BP algorithm converges to the correct MAP/MWM assignment under a simple GM formulation of MWM, as long as the corresponding LP relaxation is tight. First, motivated by forcing the tightness condition, we consider a new GM formulation of MWM, say C-GM, using non-intersecting odd-sized cycles in the graph; the new corresponding LP relaxation, say C-LP, becomes tight for more MWM instances. However, the tightness of C-LP does not guarantee such convergence and correctness of the new BP on C-GM. To address the issue, we introduce a novel graph transformation applied to C-GM, which results in another GM formulation of MWM, and prove that the respective BP on it converges to the correct MAP/MWM assignment, as long as C-LP is tight. Finally, we also show that C-LP always has half-integral solutions, which leads to an efficient BP-based MWM heuristic consisting of sequential, “cutting plane” modifications to the underlying GM. Our experiments show that this BP-based cutting plane heuristic performs as well as one based on traditional LP solvers.
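The max-product BP underlying these results is exact on tree-structured GMs, which is the baseline fact the paper's convergence guarantees extend to loopy matching formulations. A minimal sketch (the 3-variable chain and its potentials are arbitrary illustrative choices, not an example from the paper) runs one exact message-passing sweep and checks that the max-belief at every node equals the MAP value found by brute force.

```python
import numpy as np
from itertools import product

# Max-product BP on a binary chain x1 - x2 - x3. On a tree, the maximum
# of the belief at every node equals the value of the MAP configuration.
phi = [np.array([1.0, 4.0]), np.array([1.0, 1.0]), np.array([3.0, 1.0])]
psi = np.array([[2.0, 1.0],            # shared pairwise potential:
                [1.0, 2.0]])           # favors agreeing neighbors

# One forward and one backward sweep of messages (exact on a chain).
m12 = np.array([max(phi[0] * psi[:, x2]) for x2 in (0, 1)])        # 1 -> 2
m32 = np.array([max(phi[2] * psi[x2, :]) for x2 in (0, 1)])        # 3 -> 2
m21 = np.array([max(phi[1] * psi[x1, :] * m32) for x1 in (0, 1)])  # 2 -> 1
m23 = np.array([max(phi[1] * psi[:, x3] * m12) for x3 in (0, 1)])  # 2 -> 3

beliefs = [phi[0] * m21, phi[1] * m12 * m32, phi[2] * m23]

# Brute-force MAP value over all 8 configurations, for comparison.
def score(x1, x2, x3):
    return phi[0][x1] * phi[1][x2] * phi[2][x3] * psi[x1, x2] * psi[x2, x3]

map_value = max(score(*x) for x in product((0, 1), repeat=3))
print(map_value, [b.max() for b in beliefs])  # every max-belief equals map_value
```

On graphs with cycles, as in the MWM formulations above, this guarantee fails in general, which is why the paper needs LP tightness and the graph transformation.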


Neural Information Processing Systems | 2010

On Herding and the Perceptron Cycling Theorem

Andrew E. Gelfand; Yutian Chen; Laurens van der Maaten; Max Welling


National Conference on Artificial Intelligence | 2011

Pushing the power of stochastic greedy ordering schemes for inference in graphical models

Kalev Kask; Andrew E. Gelfand; Lars Otten; Rina Dechter


Uncertainty in Artificial Intelligence | 2010

BEEM: bucket elimination with external memory

Kalev Kask; Andrew E. Gelfand


Neural Information Processing Systems | 2013

A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles

Jinwoo Shin; Andrew E. Gelfand; Michael Chertkov


Uncertainty in Artificial Intelligence | 2012

A Cluster-Cumulant Expansion at the Fixed Points of Belief Propagation

Max Welling; Andrew E. Gelfand; Alexander T. Ihler


Uncertainty in Artificial Intelligence | 2012

Generalized Belief Propagation on tree robust structured region graphs

Andrew E. Gelfand; Max Welling


National Conference on Artificial Intelligence | 2011

Stopping rules for randomized greedy triangulation schemes

Andrew E. Gelfand; Kalev Kask; Rina Dechter

Collaboration


Dive into Andrew E. Gelfand's collaborations.

Top Co-Authors

Michael Chertkov, Los Alamos National Laboratory
Max Welling, University of Amsterdam
Kalev Kask, University of California
Rina Dechter, University of California
Yutian Chen, University of California
Lars Otten, University of California
Laurens van der Maaten, Delft University of Technology