Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Michael Werman is active.

Publication


Featured research published by Michael Werman.


International Conference on Computer Graphics and Interactive Techniques | 2002

Gradient domain high dynamic range compression

Raanan Fattal; Dani Lischinski; Michael Werman

We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then obtained by solving a Poisson equation on the modified gradient field. Our results demonstrate that the method is capable of drastic dynamic range compression, while preserving fine details and avoiding common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance ordinary images by bringing out detail in dark regions.
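The pipeline described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the function name, the simple Jacobi iteration, and the parameter values are my own choices (the paper computes the attenuation over a multiresolution pyramid and uses a much faster Poisson solver).

```python
import numpy as np

def compress_dynamic_range(log_lum, alpha=0.1, beta=0.85, iters=2000):
    """Toy gradient-domain compression of a log-luminance image:
    attenuate large gradients, then recover an image whose gradients
    approximate the modified field by solving the Poisson equation
    lap(I) = div(G) with a Jacobi scheme."""
    H, W = log_lum.shape
    # Forward-difference gradient field.
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    gx[:, :-1] = log_lum[:, 1:] - log_lum[:, :-1]
    gy[:-1, :] = log_lum[1:, :] - log_lum[:-1, :]

    # Attenuate large magnitudes: phi = (alpha/|g|) * (|g|/alpha)^beta, beta < 1.
    mag = np.hypot(gx, gy)
    phi = np.ones_like(mag)
    nz = mag > 1e-6
    phi[nz] = (alpha / mag[nz]) * (mag[nz] / alpha) ** beta
    gx *= phi; gy *= phi

    # Divergence of the attenuated field (backward differences).
    div = np.zeros((H, W))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]; div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]; div[0, :] += gy[0, :]

    # Jacobi iterations for lap(I) = div with Neumann boundaries.
    I = np.zeros((H, W))
    for _ in range(iters):
        P = np.pad(I, 1, mode='edge')
        I = (P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - div) / 4.0
    return I - I.mean() + log_lum.mean()
```

On a synthetic log-luminance step, the recovered image has a visibly smaller dynamic range while the step edge itself is preserved.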


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995

Linear time Euclidean distance transform algorithms

Heinz Breu; Joseph Gil; David G. Kirkpatrick; Michael Werman

Two linear time (and hence asymptotically optimal) algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented. The algorithms are based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit (feature) pixels in the image. The first algorithm, which is of primarily theoretical interest, constructs the complete Voronoi diagram. The second, more practical, algorithm constructs the Voronoi diagram where it intersects the horizontal lines passing through the image pixel centers. Extensions to higher dimensional images and to other distance functions are also discussed.
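For reference, the transform itself is easy to state directly from its definition. The sketch below is a quadratic-time brute force that only illustrates what the paper's linear-time Voronoi-based algorithms compute (the function name is mine):

```python
import numpy as np

def edt_bruteforce(binary):
    """Euclidean distance transform by definition: each pixel gets
    the distance to the nearest feature (1) pixel. O(n*m) per image
    pair of pixel/feature -- the paper achieves linear total time."""
    feats = np.argwhere(binary)               # coordinates of feature pixels
    H, W = binary.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # Squared distance from every pixel to every feature pixel.
    d2 = ((pts[:, None, :] - feats[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)).reshape(H, W)
```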


International Conference on Computer Vision | 2009

Fast and robust Earth Mover's Distances

Ofir Pele; Michael Werman

We present a new algorithm for a robust family of Earth Mover's Distances (EMDs) with thresholded ground distances. The algorithm transforms the flow network of the EMD so that the number of edges is reduced by an order of magnitude. As a result, we compute the EMD an order of magnitude faster than the original algorithm, which makes it possible to compute the EMD on large histograms and databases. In addition, we show that EMDs with thresholded ground distances have many desirable properties. First, they correspond to the way humans perceive distances. Second, they are robust to outlier noise and quantization effects. Third, they are metrics. Finally, experimental results on image retrieval show that thresholding the ground distance of the EMD improves both accuracy and speed.
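A thresholded-ground-distance EMD between equal-mass histograms can be written down directly as a transportation linear program. The sketch below (function name and LP formulation are mine) only illustrates the distance family; the paper's contribution is a transformed flow network that computes the same value an order of magnitude faster:

```python
import numpy as np
from scipy.optimize import linprog

def emd_thresholded(p, q, ground, t):
    """EMD between two equal-mass histograms p, q with the thresholded
    ground distance min(ground, t), solved as a transportation LP."""
    d = np.minimum(ground, t)
    n, m = d.shape
    c = d.ravel()                         # flow variables f_ij, row-major
    A_eq, b_eq = [], []
    for i in range(n):                    # outgoing flow from bin i equals p_i
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(p[i])
    for j in range(m):                    # incoming flow to bin j equals q_j
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(q[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun
```

Lowering the threshold caps how much any single unit of mass can cost, which is exactly the source of the robustness to outliers described above.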


IEEE Transactions on Visualization and Computer Graphics | 2001

Texture mixing and texture movie synthesis using statistical learning

Ziv Bar-Joseph; Ran El-Yaniv; Dani Lischinski; Michael Werman

We present an algorithm based on statistical learning for synthesizing static and time-varying textures matching the appearance of an input texture. Our algorithm is general and automatic, and it works well on various types of textures, including 1D sound textures, 2D texture images, and 3D texture movies. The same method is also used to generate 2D texture mixtures that simultaneously capture the appearance of a number of different input textures. In our approach, input textures are treated as sample signals generated by a stochastic process. We first construct a tree representing a hierarchical multiscale transform of the signal using wavelets. From this tree, new random trees are generated by learning and sampling the conditional probabilities of the paths in the original tree. Transformation of these random trees back into signals results in new random textures. In the case of 2D texture synthesis, our algorithm produces results that are generally as good as or better than those produced by previously described methods in this field. For texture mixtures, our results are better and more general than those produced by earlier methods. For texture movies, we present the first algorithm that is able to automatically generate movie clips of dynamic phenomena such as waterfalls, fire flames, a school of jellyfish, a crowd of people, etc. Our results indicate that the proposed technique is effective and robust.


Computer Vision and Pattern Recognition | 2004

Color lines: image specific color representation

Ido Omer; Michael Werman

The problem of deciding whether two pixels in an image have the same real world color is a fundamental problem in computer vision. Many color spaces are used in different applications for discriminating color from intensity to create an informative representation of color. The major drawback of all of these representations is that they assume no color distortion. In practice the colors of real world images are distorted both in the scene itself and in the image capturing process. In this work we introduce color lines, an image specific color representation that is robust to color distortion and provides a compact and useful representation of the colors in a scene.
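The core geometric intuition is that pixels of one real-world surface color, seen under varying intensity, trace an elongated cluster in RGB space that is well approximated by a line. A minimal sketch of fitting one such line (function name and the SVD-based fit are my own illustration; the paper builds the lines per image from the RGB histogram and handles distorted, curved clusters):

```python
import numpy as np

def fit_color_line(pixels):
    """Fit a single 'color line' to RGB samples of one surface: the
    principal direction of the pixel cloud in RGB space. Returns
    (mean, unit direction, fraction of variance explained)."""
    mean = pixels.mean(axis=0)
    # SVD of the centered cloud; the first right singular vector is
    # the best-fit line direction, in the least-squares sense.
    u, s, vt = np.linalg.svd(pixels - mean, full_matrices=False)
    direction = vt[0]
    explained = s[0] ** 2 / (s ** 2).sum()
    return mean, direction, explained
```

A high explained-variance fraction indicates that a single color line describes the samples well, despite intensity variation.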


European Conference on Computer Vision | 2010

The quadratic-chi histogram distance family

Ofir Pele; Michael Werman

We present a new histogram distance family, the Quadratic-Chi (QC). QC members are Quadratic-Form distances with a cross-bin χ2-like normalization. The cross-bin χ2-like normalization reduces the undue influence of large bins. Normalization was shown to be helpful in many cases, where the χ2 histogram distance outperformed the L2 norm. However, χ2 is sensitive to quantization effects, such as those caused by light changes, shape deformations, etc. The Quadratic-Form part of QC members takes care of cross-bin relationships (e.g. red and orange), alleviating the quantization problem. We present two new cross-bin histogram distance properties, Similarity-Matrix-Quantization-Invariance and Sparseness-Invariance, and show that QC distances have these properties. We also show experimentally that they boost performance. The computation time of QC distances is linear in the number of non-zero entries in the bin-similarity matrix and the histograms, and it can easily be parallelized. We present results for image retrieval using the Scale Invariant Feature Transform (SIFT) and color image descriptors. In addition, we present results for shape classification using Shape Context (SC) and Inner Distance Shape Context (IDSC). We show that the new QC members outperform state of the art distances for these tasks, while having a short running time. The experimental results show that both the cross-bin property and the normalization are important.
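A direct NumPy transcription of the QC definition, as I understand it from the abstract: a Quadratic-Form over differences that are first divided by cross-bin χ2-like normalization factors (the function name and the 0/0-handling convention below are my own; treat this as a sketch rather than the authors' code):

```python
import numpy as np

def quadratic_chi(p, q, A, m=0.5):
    """Quadratic-Chi distance between histograms p and q.
    A is the bin-similarity matrix, 0 <= m < 1 controls the strength
    of the normalization; 0/0 terms are treated as 0."""
    z = (p + q) @ A                 # cross-bin normalization Z_i = sum_c (p_c+q_c) A_ci
    z[z == 0] = 1.0                 # where Z_i is 0 the numerator is 0 too
    d = (p - q) / z ** m            # normalized differences
    return np.sqrt(max(d @ A @ d, 0.0))
```

With A as the identity this collapses to a plain χ2-like bin-by-bin distance; a similarity matrix with off-diagonal mass is what lets nearby bins (e.g. red and orange) partially cancel.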


European Conference on Computer Vision | 2008

A Linear Time Histogram Metric for Improved SIFT Matching

Ofir Pele; Michael Werman

We present a new metric between histograms such as SIFT descriptors and a linear time algorithm for its computation. It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that SIFT bins are aligned, an assumption which is often incorrect due to quantization, distortion, occlusion, etc. In this paper we present a new Earth Mover's Distance (EMD) variant. First, we show that it is a metric (unlike the original EMD [1], which is a metric only for normalized histograms) and a natural extension of the L1 metric. Second, we propose a linear time algorithm for the computation of the EMD variant, with a robust ground distance for oriented gradients. Finally, extensive experimental results on the Mikolajczyk and Schmid dataset [2] show that our method outperforms state of the art distances.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Computing 2-D min, median, and max filters

Joseph Gil; Michael Werman

Fast algorithms for computing min, median, max, or any other order statistic filter transforms are described. The algorithms take constant time per pixel to compute min or max filters and polylog time per pixel, in the size of the filter, to compute the median filter. A logarithmic time per pixel lower bound for the computation of the median filter is shown.
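The constant-time-per-pixel max filter can be sketched in 1D (a 2D filter is two 1D passes): split the signal into blocks of the window size, precompute running maxima forward and backward within each block, and answer every window with one suffix-max plus one prefix-max. This is my illustrative rendering of the block scheme commonly attributed to this paper (the van Herk / Gil-Werman filter); the function name and padding convention are mine:

```python
import numpy as np

def max_filter_1d(a, w):
    """Sliding-window maximum with O(1) comparisons per element:
    within each block of size w, 'pre' holds prefix maxima and 'suf'
    holds suffix maxima; a window spans at most two adjacent blocks,
    so its max is max(suffix of first block, prefix of second)."""
    n = len(a)
    pad = (-n) % w                               # pad to a multiple of w
    x = np.concatenate([a, np.full(pad, -np.inf)])
    blocks = x.reshape(-1, w)
    pre = np.maximum.accumulate(blocks, axis=1).ravel()
    suf = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()
    out = np.empty(n - w + 1)
    for i in range(n - w + 1):
        out[i] = max(suf[i], pre[i + w - 1])
    return out
```

The min filter is symmetric (negate, or swap maximum for minimum); the median filter needs the more involved polylog machinery described in the abstract.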


International Conference on Computer Vision | 1995

Trilinearity of three perspective views and its associated tensor

Amnon Shashua; Michael Werman

It has been established that certain trilinear forms of three perspective views give rise to a tensor of 27 intrinsic coefficients. We show in this paper that a permutation of the trilinear coefficients produces three homography matrices (projective transformations of planes) of three distinct intrinsic planes, respectively. This, in turn, yields the result that 3D invariants are recovered directly, simply by appropriate arrangement of the tensor's coefficients. On a secondary level, we show new relations between the fundamental matrix, epipoles, Euclidean structure, and the trilinear tensor. On the practical side, the new results extend the existing envelope of methods of 3D recovery from 2D views: for example, new linear methods that cut through the epipolar geometry, and new methods for computing epipolar geometry using redundancy available across many views.


Dyn3D '09 Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging | 2009

Fusing Time-of-Flight Depth and Color for Real-Time Segmentation and Tracking

Amit Bleiweiss; Michael Werman

We present an improved framework for real-time segmentation and tracking by fusing depth and RGB color data. We are able to solve common problems seen in tracking and segmentation of RGB images, such as occlusions, fast motion, and objects of similar color. Our proposed real-time mean shift based algorithm outperforms the current state of the art and is significantly better in difficult scenarios.
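The underlying procedure is mean shift over joint feature vectors, where appending depth to the color features is what disambiguates similarly colored objects. A generic flat-kernel mean shift sketch (my own minimal version; the paper's tracker is an optimized real-time variant fusing RGB and time-of-flight depth):

```python
import numpy as np

def mean_shift(points, bandwidth, iters=30):
    """Flat-kernel mean shift: repeatedly move each mode estimate to
    the mean of the points within `bandwidth` of it. `points` are
    joint feature vectors, e.g. [x, y, r, g, b, depth]."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            d = np.linalg.norm(points - p, axis=1)
            modes[i] = points[d < bandwidth].mean(axis=0)
    return modes
```

Points belonging to the same cluster converge to the same mode, which gives both the segmentation and, frame to frame, the track.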

Collaboration


Dive into Michael Werman's collaborations.

Top Co-Authors

Shmuel Peleg
Hebrew University of Jerusalem

Daphna Weinshall
Hebrew University of Jerusalem

Amnon Shashua
Hebrew University of Jerusalem

Dani Lischinski
Hebrew University of Jerusalem

Ofir Pele
University of Pennsylvania

Gil Ben-Artzi
Hebrew University of Jerusalem

Moshe Ben-Ezra
Hebrew University of Jerusalem

Yacov Hel-Or
Interdisciplinary Center Herzliya

Ido Omer
Hebrew University of Jerusalem