
Publication


Featured research published by Tamir Hazan.


European Conference on Computer Vision | 2006

Multi-way clustering using super-symmetric non-negative tensor factorization

Amnon Shashua; Ron Zass; Tamir Hazan

We consider the problem of clustering data into k ≥ 2 clusters given complex relations — going beyond pairwise — between the data points. The complex n-wise relations are modeled by an n-way array where each entry corresponds to an affinity measure over an n-tuple of data points. We show that a probabilistic assignment of data points to clusters is equivalent, under mild conditional independence assumptions, to a super-symmetric non-negative factorization of the closest hyper-stochastic version of the input n-way affinity array. We derive an algorithm for finding a local minimum solution to the factorization problem whose computational complexity is proportional to the number of n-tuple samples drawn from the data. We apply the algorithm to a number of visual interpretation problems including 3D multi-body segmentation and illumination-based clustering of human faces.
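
As an illustration of the kind of factorization involved, here is a minimal sketch of a multiplicative-update non-negative factorization of a symmetric 3-way affinity array into a sum of rank-one super-symmetric terms. It is not the paper's algorithm (which additionally normalizes the array toward a hyper-stochastic version and works from sampled n-tuples); the update rule, iteration count, and toy affinity array are illustrative assumptions.

```python
import numpy as np

def sym_ntf(A, k, iters=200, eps=1e-9, seed=0):
    """Multiplicative-update sketch of a super-symmetric non-negative CP
    factorization  A[a, b, c] ~= sum_j G[a, j] * G[b, j] * G[c, j].

    A is a non-negative, symmetric 3-way affinity array of shape (n, n, n);
    the returned G has shape (n, k), and its row-normalized form can be read
    as soft cluster assignments."""
    rng = np.random.default_rng(seed)
    G = rng.random((A.shape[0], k)) + eps
    for _ in range(iters):
        num = np.einsum('abc,bj,cj->aj', A, G, G)   # data contracted twice with G
        den = G @ ((G.T @ G) ** 2) + eps            # model contracted the same way
        G *= num / den                              # sign-preserving multiplicative step
    return G

# Toy usage: two groups of points with strong within-group triple affinities.
n, k = 6, 2
A = np.zeros((n, n, n))
A[:3, :3, :3] = 1.0
A[3:, 3:, 3:] = 1.0
G = sym_ntf(A, k)
print(np.round(G / G.sum(axis=1, keepdims=True), 2))  # soft assignment per point
```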


IEEE Transactions on Information Theory | 2010

Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference

Tamir Hazan; Amnon Shashua

Inference problems in graphical models can be represented as a constrained optimization of a free-energy function. In this paper, we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. In particular, we generalize the sum-product and max-product belief propagation (BP) algorithms and the tree-reweighted (TRW) sum-product and max-product algorithms (TRBP), and introduce a new set of convergent algorithms based on a “convex-free-energy” and linear-programming (LP) relaxation as the zero-temperature limit of a convex free energy. The main idea of this work arises from taking a general perspective on the existing BP and TRBP algorithms and observing that they are all reductions from the basic optimization formula f + Σ_i h_i, where the function f is extended-valued, strictly convex but nonsmooth, and the functions h_i are extended-valued (not necessarily convex). We use tools from convex duality to present the “primal-dual ascent” algorithm, an extension of the Bregman successive projection scheme designed to handle optimization problems of the general form f + Σ_i h_i. We then map the fractional-free-energy variational principle for approximate inference onto this optimization formula and introduce the “norm-product” message-passing algorithm. Special cases of the norm-product include the sum-product and max-product BP algorithms, TRBP, and NMPLP. When the fractional free energy is set to be convex (convex-free-energy), the norm-product is globally convergent for the estimation of marginal probabilities and for approximating the LP-relaxation. We also introduce another branch of the norm-product, the “convex-max-product”, which arises as the “zero-temperature” version of the convex-free-energy. The convex-max-product is convergent (unlike max-product) and aims at solving the LP-relaxation.
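
The sum-product special case mentioned above is easy to sketch. The following is a minimal sum-product message-passing example on a three-node chain MRF with illustrative random potentials, verified against brute-force enumeration; it is the classical BP special case, not the norm-product or convex-free-energy algorithms introduced in the paper.

```python
import numpy as np
from itertools import product

# Pairwise MRF on a 3-node chain with K states per node (illustrative potentials).
K = 3
rng = np.random.default_rng(1)
unary = [rng.random(K) for _ in range(3)]        # psi_i(x_i), non-negative
pair = [rng.random((K, K)) for _ in range(2)]    # psi_{i,i+1}(x_i, x_{i+1})

# Sum-product messages: forward (left-to-right) and backward (right-to-left).
fwd = [np.ones(K)]                               # message arriving at node 0 from the left
for i in range(2):
    m = pair[i].T @ (unary[i] * fwd[i])          # sum_{x_i} psi_i(x_i) psi_{i,i+1} m_in(x_i)
    fwd.append(m / m.sum())
bwd = [np.ones(K)]                               # message arriving at node 2 from the right
for i in (1, 0):
    m = pair[i] @ (unary[i + 1] * bwd[0])
    bwd.insert(0, m / m.sum())

# Beliefs: unary times the two incoming messages, normalized (exact on a tree).
beliefs = [unary[i] * fwd[i] * bwd[i] for i in range(3)]
beliefs = [b / b.sum() for b in beliefs]

# Brute-force check against the true marginals.
joint = np.zeros((K, K, K))
for x in product(range(K), repeat=3):
    joint[x] = (unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
                * pair[0][x[0], x[1]] * pair[1][x[1], x[2]])
joint /= joint.sum()
exact = [joint.sum(axis=(1, 2)), joint.sum(axis=(0, 2)), joint.sum(axis=(0, 1))]
print(all(np.allclose(b, e) for b, e in zip(beliefs, exact)))   # True on a tree
```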


European Conference on Computer Vision | 2012

Continuous Markov random fields for robust stereo estimation

Koichiro Yamaguchi; Tamir Hazan; David A. McAllester; Raquel Urtasun

In this paper we present a novel slanted-plane model which reasons jointly about occlusion boundaries as well as depth. We formulate the problem as one of inference in a hybrid MRF composed of both continuous (i.e., slanted 3D planes) and discrete (i.e., occlusion boundaries) random variables. This allows us to define potentials encoding the ownership of the pixels that compose the boundary between segments, as well as potentials encoding which junctions are physically possible. Our approach outperforms the state-of-the-art on Middlebury high-resolution imagery [1] as well as on the more challenging KITTI dataset [2], while being more efficient than existing slanted-plane MRF methods, taking on average 2 minutes to perform inference on high-resolution imagery.


Computer Vision and Pattern Recognition | 2012

Efficient structured prediction for 3D indoor scene understanding

Alexander G. Schwing; Tamir Hazan; Marc Pollefeys; Raquel Urtasun

Existing approaches to indoor scene understanding formulate the problem as a structured prediction task focusing on estimating the 3D bounding box which best describes the scene layout. Unfortunately, these approaches utilize high order potentials which are computationally intractable and rely on ad-hoc approximations for both learning and inference. In this paper we show that the potentials commonly used in the literature can be decomposed into pair-wise potentials by extending the concept of integral images to geometry. As a consequence no heuristic reduction of the search space is required. In practice, this results in large improvements in performance over the state-of-the-art, while being orders of magnitude faster.
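
For readers unfamiliar with the primitive being generalized, here is a minimal sketch of a classic 2D integral image (summed-area table), which lets any axis-aligned box sum be read off in constant time; the paper's contribution is extending this idea to geometric quantities, which the sketch does not attempt.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: S[i, j] = sum of img[:i, :j]."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups, i.e. O(1) per box."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

img = np.arange(20.0).reshape(4, 5)
S = integral_image(img)
assert box_sum(S, 1, 1, 3, 4) == img[1:3, 1:4].sum()
```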


Computer Vision and Pattern Recognition | 2011

Distributed message passing for large scale graphical models

Alexander G. Schwing; Tamir Hazan; Marc Pollefeys; Raquel Urtasun

In this paper we propose a distributed message-passing algorithm for inference in large scale graphical models. Our method can handle large problems efficiently by distributing and parallelizing the computation and memory requirements. The convergence and optimality guarantees of recently developed message-passing algorithms are preserved by introducing new types of consistency messages, sent between the distributed computers. We demonstrate the effectiveness of our approach in the task of stereo reconstruction from high-resolution imagery, and show that inference is possible with more than 200 labels in images larger than 10 MPixels.


Computer Vision and Pattern Recognition | 2008

A Parallel Decomposition Solver for SVM: Distributed Dual Ascent Using Fenchel Duality

Tamir Hazan; Amit Man; Amnon Shashua

We introduce a distributed algorithm for solving large scale support vector machine (SVM) problems. The algorithm divides the training set among a number of processing nodes, each independently running an SVM sub-problem associated with its subset of the training data. The algorithm is a parallel (Jacobi) block-update scheme derived from the convex conjugate (Fenchel duality) form of the original SVM problem. Each update step consists of a modified SVM solver running in parallel over the sub-problems, followed by a simple global update. We derive bounds on the number of updates showing that the number of iterations (independent SVM applications on sub-problems) required to obtain a solution of accuracy ε is O(log(1/ε)). We demonstrate the efficiency and applicability of our algorithm by running large scale experiments on standardized datasets and comparing the results to state-of-the-art SVM solvers.
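
As a rough illustration of block-parallel dual training (not the authors' exact Fenchel-dual scheme), the sketch below runs dual coordinate ascent for a linear hinge-loss SVM independently on each block against the current shared weights, then combines the blocks' updates with a conservative averaging step. The regularization constant, block count, and toy data are assumptions made for the example.

```python
import numpy as np

def block_pass(Xb, yb, alpha_b, w, lam):
    """One pass of dual coordinate ascent on one block, starting from the shared
    weights w. Returns this block's change to (alpha, w)."""
    a, w_local = alpha_b.copy(), w.copy()
    for i in range(Xb.shape[0]):
        g = 1.0 - yb[i] * (w_local @ Xb[i])               # hinge-loss margin violation
        step = lam * g / (Xb[i] @ Xb[i] + 1e-12)
        d = np.clip(a[i] + step, 0.0, 1.0) - a[i]         # keep duals in [0, 1]
        a[i] += d
        w_local += (d * yb[i] / lam) * Xb[i]              # w = (1/lam) * sum_i a_i y_i x_i
    return a - alpha_b, w_local - w

def parallel_svm(X, y, n_blocks=4, lam=1.0, outer_iters=100, seed=0):
    """Jacobi-style sketch: blocks update independently against the current w,
    then a conservative global step averages their updates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    blocks = np.array_split(rng.permutation(n), n_blocks)
    alpha, w = np.zeros(n), np.zeros(d)
    for _ in range(outer_iters):
        updates = [block_pass(X[b], y[b], alpha[b], w, lam) for b in blocks]
        for b, (da, dw) in zip(blocks, updates):          # simple global update
            alpha[b] += da / n_blocks
            w += dw / n_blocks
    return w

# Toy usage on linearly separable data.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=200)
X = rng.normal(size=(200, 2))
X[:, 0] += 2.0 * y
w = parallel_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

The averaging in the global step keeps the primal-dual correspondence w = (1/λ) Σ α_i y_i x_i intact, which is what makes the parallel (Jacobi) combination safe.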


International Conference on Acoustics, Speech, and Signal Processing | 2011

PAC-Bayesian approach for minimization of phoneme error rate

Joseph Keshet; David A. McAllester; Tamir Hazan

We describe a new approach for phoneme recognition which aims at minimizing the phoneme error rate. Building on structured prediction techniques, we formulate the phoneme recognizer as a linear combination of feature functions. We state a PAC-Bayesian generalization bound, which gives an upper bound on the expected phoneme error rate in terms of the empirical phoneme error rate. Our algorithm is derived by computing the gradient of the PAC-Bayesian bound and minimizing the bound by stochastic gradient descent. The resulting algorithm is iterative and easy to implement. Experiments on the TIMIT corpus show that our method achieves the lowest phoneme error rate compared to other discriminative and generative models with the same expressive power.
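
One piece of machinery this kind of approach relies on is that the gradient of an expected task loss under a Gaussian posterior can be estimated without differentiating the loss itself. The sketch below uses the standard Gaussian-smoothing identity grad_w E[L(w + eps)] = E[eps * L(w + eps)] / sigma^2, with a toy 0/1 loss standing in for the phoneme error rate; it conveys the flavor of loss-driven stochastic gradient training only, not the paper's actual bound, feature functions, or recognizer.

```python
import numpy as np

def task_loss(w, x, y):
    """Non-differentiable 0/1 loss standing in for a task loss such as error rate."""
    return float(np.sign(w @ x) != y)

def smoothed_grad(w, x, y, sigma=0.5, samples=20, rng=None):
    """Monte-Carlo estimate of grad_w E[L(w + eps)], eps ~ N(0, sigma^2 I),
    via the identity grad = E[eps * L(w + eps)] / sigma^2."""
    if rng is None:
        rng = np.random.default_rng()
    g = np.zeros_like(w)
    for _ in range(samples):
        eps = rng.normal(0.0, sigma, size=w.shape)
        g += eps * task_loss(w + eps, x, y)
    return g / (samples * sigma ** 2)

# SGD on the smoothed expected loss plus a small L2 term (toy linear model).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sign(X @ rng.normal(size=5))
w = np.zeros(5)
for t in range(2000):
    i = rng.integers(len(X))
    w -= 0.05 * (smoothed_grad(w, X[i], y[i], rng=rng) + 1e-3 * w)
print("training error rate:", np.mean(np.sign(X @ w) != y))
```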


International Conference on Scale Space and Variational Methods in Computer Vision | 2015

Probabilistic Correlation Clustering and Image Partitioning Using Perturbed Multicuts

Jörg Hendrik Kappes; Paul Swoboda; Bogdan Savchynskyy; Tamir Hazan; Christoph Schnörr

We exploit recent progress on globally optimal MAP inference by integer programming and on perturbation-based approximations of the log-partition function. This enables us to represent the uncertainty of image partitions locally by approximate marginal distributions in a mathematically substantiated way, and to rectify local data term cues so as to close contours and obtain valid partitions. Our approach works for any graphically represented problem instance of correlation clustering, which we demonstrate with an additional social network example.
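
The perturbation-based approximation of the log-partition function referred to above rests on the Gumbel-max identity, which the following sketch checks on a small, fully enumerated toy model. The model size and scores are arbitrary assumptions; the paper's actual method uses perturbed MAP (multicut) solutions rather than full enumeration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=16)                   # scores of 16 configurations (toy model)
logZ = np.log(np.exp(theta).sum())            # exact log-partition function

# Gumbel-max: with i.i.d. Gumbel(0, 1) noise per configuration,
# max_x (theta_x + gamma_x) is Gumbel(logZ, 1); its mean is logZ plus the
# Euler-Mascheroni constant, so subtracting that constant recovers logZ.
samples = 200_000
gamma = rng.gumbel(size=(samples, theta.size))
estimate = (theta + gamma).max(axis=1).mean() - np.euler_gamma
print(round(logZ, 3), round(estimate, 3))     # agree up to Monte-Carlo error

# The perturbed argmax is an exact sample from p(x) proportional to exp(theta_x),
# which is how repeated perturbed MAP calls yield approximate marginals.
freq = np.bincount((theta + gamma).argmax(axis=1), minlength=theta.size) / samples
print(np.allclose(freq, np.exp(theta - logZ), atol=1e-2))
```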


Computer Vision and Pattern Recognition | 2014

Congruency-Based Reranking

Itai Ben-Shalom; Noga Levy; Lior Wolf; Nachum Dershowitz; Adiel Ben-Shalom; Roni Shweka; Yaacov Choueka; Tamir Hazan; Yaniv Bar

We present a tool for re-ranking the results of a specific query by considering the (n+1) × (n+1) matrix of pairwise similarities among the elements of the set of n retrieved results and the query itself. The re-ranking thus makes use of the similarities between the various results and does not employ additional sources of information. The tool is based on graphical Bayesian models, which reinforce retrieved items strongly linked to other retrievals, and on repeated clustering to measure the stability of the obtained associations. The utility of the tool is demonstrated within the context of visual search of documents from the Cairo Genizah and for retrieval of paintings by the same artist and in the same style.
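
To make the setting concrete, here is a sketch that reranks results using nothing but the (n+1) × (n+1) similarity matrix, with personalized PageRank seeded at the query as a simple stand-in for the paper's graphical Bayesian model and repeated-clustering stability measure; the toy similarity matrix, damping factor, and iteration count are assumptions.

```python
import numpy as np

def rerank_by_similarity(S, alpha=0.85, iters=100):
    """Rerank retrieved items using only the (n+1) x (n+1) pairwise-similarity
    matrix S, where row/column 0 is the query: personalized PageRank seeded at
    the query, so items strongly linked to other strong retrievals gain score."""
    P = S / S.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    seed = np.zeros(len(S))
    seed[0] = 1.0                             # restart mass sits on the query
    r = seed.copy()
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) * seed
    return np.argsort(-r[1:]), r[1:]          # ranking over the n results

# Toy usage: results 1-3 form a tight cluster near the query, 4-5 are outliers.
S = np.array([
    [1.0, 0.9, 0.8, 0.85, 0.3, 0.2],
    [0.9, 1.0, 0.9, 0.90, 0.2, 0.1],
    [0.8, 0.9, 1.0, 0.90, 0.2, 0.2],
    [0.85, 0.9, 0.9, 1.0, 0.1, 0.2],
    [0.3, 0.2, 0.2, 0.1, 1.0, 0.6],
    [0.2, 0.1, 0.2, 0.2, 0.6, 1.0],
])
order, scores = rerank_by_similarity(S)
print(order + 1)                              # the coherent cluster is ranked first
```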


The Visual Computer | 2018

Co-segmentation for space-time co-located collections

Hadar Averbuch-Elor; Johannes Kopf; Tamir Hazan; Daniel Cohen-Or

We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually shot by multiple photographers, and may contain multiple co-occurring objects which are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, where local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire set of images without building a global model, and thus successfully overcomes large variability in the appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets which were adapted for our novel problem setting.

Collaboration


Dive into Tamir Hazan's collaborations.

Top Co-Authors

Amnon Shashua, Hebrew University of Jerusalem
Tommi S. Jaakkola, Massachusetts Institute of Technology
David A. McAllester, Toyota Technological Institute at Chicago
Francesco Orabona, Toyota Technological Institute at Chicago
Subhransu Maji, University of Massachusetts Amherst
Alon Cohen, Technion – Israel Institute of Technology