Andrew E. Gelfand
University of California, Irvine
Publications
Featured research published by Andrew E. Gelfand.
international symposium on information theory | 2013
Andrew E. Gelfand; Jinwoo Shin; Michael Chertkov
Belief Propagation (BP) is a popular, distributed heuristic for performing MAP computations in Graphical Models. BP can be interpreted, from a variational perspective, as minimizing the Bethe Free Energy (BFE). BP can also be used to solve a special class of Linear Programming (LP) problems. For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at “zero temperature”. We generalize these prior results and establish a tight characterization of the LP problems that can be formulated as an equivalent LP relaxation of MAP inference. Moreover, we suggest an efficient, iterative annealing BP algorithm for solving this broader class of LP problems. We demonstrate the algorithm’s performance on a set of weighted matching problems, using it as a cutting plane method to solve a sequence of LPs tightened by adding “blossom” inequalities.
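A rough sketch of the cutting-plane loop described above, with a generic LP solver (scipy.optimize.linprog) standing in for the paper's annealing BP routine. Violated "blossom" (odd-set) inequalities are found here by brute-force enumeration, which is workable only on tiny graphs and purely illustrative; the function name mwm_cutting_plane and all parameters are invented for this example.

# Sketch: cutting-plane loop for the maximum-weight-matching LP, tightened
# with "blossom" (odd-set) inequalities.  A generic LP solver stands in for
# the paper's annealed BP so the overall loop is easy to see.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def mwm_cutting_plane(nodes, edges, weights, max_odd=5, tol=1e-6):
    """edges: list of (u, v) pairs; weights: parallel list of edge weights."""
    m = len(edges)
    # Degree constraints: sum of x_e over edges touching v is at most 1.
    A, b = [], []
    for v in nodes:
        A.append([1.0 if v in e else 0.0 for e in edges])
        b.append(1.0)
    while True:
        res = linprog(c=-np.asarray(weights), A_ub=np.asarray(A),
                      b_ub=np.asarray(b), bounds=[(0, 1)] * m, method="highs")
        x = res.x
        # Separation by enumeration (fine only for tiny graphs): look for an
        # odd node set S whose internal edges exceed (|S| - 1) / 2.
        cut = None
        for k in range(3, max_odd + 1, 2):
            for S in combinations(nodes, k):
                inside = [i for i, (u, v) in enumerate(edges)
                          if u in S and v in S]
                if sum(x[i] for i in inside) > (k - 1) / 2 + tol:
                    cut = (inside, (k - 1) / 2)
                    break
            if cut:
                break
        if cut is None:
            return x  # no violated blossom found
        row = [0.0] * m
        for i in cut[0]:
            row[i] = 1.0
        A.append(row)
        b.append(cut[1])

# Example: on a triangle the first LP optimum is the all-halves vector with
# value 1.5; adding the blossom cut for S = {0, 1, 2} removes it.
print(mwm_cutting_plane(nodes=[0, 1, 2],
                        edges=[(0, 1), (1, 2), (0, 2)],
                        weights=[1.0, 1.0, 1.0]))

After the single cut is added, the LP optimum drops to the true maximum matching weight of 1, which is the kind of tightening the abstract's "blossom" inequalities provide.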
international conference on computer vision | 2011
Yutian Chen; Andrew E. Gelfand; Charless C. Fowlkes; Max Welling
We present a new method to combine possibly inconsistent, locally (piecewise) trained conditional models p(yα∣xα) into pseudo-samples from a global model. Our method does not require training of a CRF, but instead generates samples by iterating forward a weakly chaotic dynamical system. The new method is illustrated on image segmentation tasks where classifiers based on local appearance cues are combined with pairwise boundary cues.
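The pseudo-sampling iteration is in the spirit of herding, the deterministic, weakly chaotic update studied elsewhere in this list by Welling and co-authors. Purely as an illustration of how such iterates can behave like samples, here is a minimal single-variable sketch; the function name, initialization, and target moment are invented for the example, and the paper itself works on label graphs with local conditional models rather than a lone binary variable.

# Sketch: herding-style update for one binary variable with a known mean.
def herd_binary(target_mean, n_steps=1000):
    """Deterministic pseudo-sampler whose empirical mean tracks target_mean."""
    w = target_mean              # illustrative choice: start the weight at the target moment
    samples = []
    for _ in range(n_steps):
        s = 1 if w > 0 else 0    # state maximising w * phi(s), with phi(s) = s
        w += target_mean - s     # push the weight toward the unmet moment
        samples.append(s)
    return samples

samples = herd_binary(0.3)
print(sum(samples) / len(samples))   # empirical mean approaches 0.3

Averaging the pseudo-samples recovers the target moment at roughly a 1/T rate, which is the kind of property the paper exploits when combining many local models without fitting a global CRF.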
IEEE Transactions on Information Theory | 2018
Sungsoo Ahn; Michael Chertkov; Andrew E. Gelfand; Sejun Park; Jinwoo Shin
We study the maximum weight matching (MWM) problem for general graphs through the max-product belief propagation (BP) and related Linear Programming (LP). The BP approach provides distributed heuristics for finding the maximum a posteriori (MAP) assignment in a joint probability distribution represented by a graphical model (GM), and respective LPs can be considered as continuous relaxations of the discrete MAP problem. It was recently shown that a BP algorithm converges to the correct MAP/MWM assignment under a simple GM formulation of MWM, as long as the corresponding LP relaxation is tight. First, motivated by forcing the tightness condition, we consider a new GM formulation of MWM, say C-GM, using non-intersecting odd-sized cycles in the graph; the new corresponding LP relaxation, say C-LP, becomes tight for more MWM instances. However, the tightness of C-LP no longer guarantees the convergence and correctness of the new BP on C-GM. To address the issue, we introduce a novel graph transformation applied to C-GM, which results in another GM formulation of MWM, and prove that the respective BP on it converges to the correct MAP/MWM assignment, as long as C-LP is tight. Finally, we also show that C-LP always has half-integral solutions, which leads to an efficient BP-based MWM heuristic consisting of sequential “cutting plane” modifications to the underlying GM. Our experiments show that this BP-based cutting plane heuristic performs as well as one based on traditional LP solvers.
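For reference, the "vanilla" max-product BP that the abstract takes as its starting point can be written compactly for the basic MWM graphical model: one binary variable per edge plus a degree-at-most-one constraint per vertex. The sketch below is only that baseline under illustrative names; it does not include the paper's odd-cycle variables (C-GM) or the graph transformation that restores the convergence guarantee.

# Sketch: max-sum BP for maximum weight matching on the basic GM.
from collections import defaultdict

def bp_matching(edges, weights, n_iters=50):
    """edges: list of (u, v) pairs; weights: parallel list of edge weights."""
    nbrs = defaultdict(list)                      # vertex -> incident edge indices
    for i, (u, v) in enumerate(edges):
        nbrs[u].append(i)
        nbrs[v].append(i)
    # msg[(i, u)]: message from edge variable i toward endpoint u,
    # stored as the log-ratio favouring x_i = 1 over x_i = 0.
    msg = {(i, u): 0.0 for i, e in enumerate(edges) for u in e}
    for _ in range(n_iters):
        new = {}
        for i, (u, v) in enumerate(edges):
            for send_to, other in ((u, v), (v, u)):
                # Penalty from the constraint at the *other* endpoint: the best
                # competing edge there would have to be displaced.
                compete = max((msg[(j, other)] for j in nbrs[other] if j != i),
                              default=float("-inf"))
                new[(i, send_to)] = weights[i] - max(0.0, compete)
        msg = new
    # Belief per edge: keep it if it is still profitable once both endpoint
    # constraints are accounted for.
    match = []
    for i, (u, v) in enumerate(edges):
        pen_u = max(0.0, max((msg[(j, u)] for j in nbrs[u] if j != i),
                             default=float("-inf")))
        pen_v = max(0.0, max((msg[(j, v)] for j in nbrs[v] if j != i),
                             default=float("-inf")))
        if weights[i] - pen_u - pen_v > 0:
            match.append(edges[i])
    return match

# On a 4-cycle with weights 3, 1, 3, 1 the LP relaxation is tight and the
# messages settle on the matching {(0, 1), (2, 3)}.
print(bp_matching(edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
                  weights=[3.0, 1.0, 3.0, 1.0]))

On graphs where odd cycles make the plain LP relaxation loose, this baseline can fail to converge or give wrong answers, which is exactly the gap the paper's C-GM construction and graph transformation are designed to close.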
neural information processing systems | 2010
Andrew E. Gelfand; Yutian Chen; Laurens van der Maaten; Max Welling
national conference on artificial intelligence | 2011
Kalev Kask; Andrew E. Gelfand; Lars Otten; Rina Dechter
uncertainty in artificial intelligence | 2010
Kalev Kask; Andrew E. Gelfand
neural information processing systems | 2013
Jinwoo Shin; Andrew E. Gelfand; Michael Chertkov
uncertainty in artificial intelligence | 2012
Max Welling; Andrew E. Gelfand; Alexander T. Ihler
uncertainty in artificial intelligence | 2012
Andrew E. Gelfand; Max Welling
national conference on artificial intelligence | 2011
Andrew E. Gelfand; Kalev Kask; Rina Dechter