Publication


Featured research published by Lihi Zelnik-Manor.


Computer Vision and Pattern Recognition | 2001

Event-based analysis of video

Lihi Zelnik-Manor; Michal Irani

Dynamic events can be regarded as long-term temporal objects, which are characterized by spatio-temporal features at multiple temporal scales. Based on this, we design a simple statistical distance measure between video sequences (possibly of different lengths) based on their behavioral content. This measure is non-parametric and can thus handle a wide range of dynamic events. We use this measure for isolating and clustering events within long continuous video sequences. This is done without prior knowledge of the types of events, their models, or their temporal extent. An outcome of such a clustering process is a temporal segmentation of long video sequences into event-consistent sub-sequences, and their grouping into event-consistent clusters. Our event representation and associated distance measure can also be used for event-based indexing into long video sequences, even when only one short example-clip is available. However, when multiple example-clips of the same event are available (either as a result of the clustering process, or given manually), these can be used to refine the event representation, the associated distance measure, and accordingly the quality of the detection and clustering process.
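
As a rough illustration of the kind of non-parametric measure described above, the sketch below compares two clips through histograms of temporal-gradient magnitudes at a few temporal scales and a symmetric chi-square distance. The feature choice, function names, and parameters are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' code): compare two clips by the empirical
# distributions of their temporal-gradient magnitudes at several temporal scales.
import numpy as np

def multiscale_histograms(clip, scales=(1, 2, 4), bins=32, max_grad=255.0):
    """clip: (T, H, W) grayscale video. Returns one normalized histogram per temporal scale."""
    hists = []
    for s in scales:
        # Coarser temporal scale: average non-overlapping windows of s frames.
        T = (clip.shape[0] // s) * s
        coarse = clip[:T].reshape(-1, s, *clip.shape[1:]).mean(axis=1)
        # Magnitude of temporal gradients as a simple spatio-temporal feature.
        grads = np.abs(np.diff(coarse, axis=0)).ravel()
        h, _ = np.histogram(grads, bins=bins, range=(0.0, max_grad))
        hists.append(h / max(h.sum(), 1))
    return np.stack(hists)

def chi2_distance(ha, hb, eps=1e-9):
    """Symmetric chi-square distance between stacks of per-scale histograms."""
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + eps))

# Usage: clips of different lengths are fine, since only their histograms are compared.
clip_a = np.random.rand(120, 64, 64) * 255
clip_b = np.random.rand(90, 64, 64) * 255
d = chi2_distance(multiscale_histograms(clip_a), multiscale_histograms(clip_b))
```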


Computer Vision and Pattern Recognition | 2005

Beyond pairwise clustering

Sameer Agarwal; Jongwoo Lim; Lihi Zelnik-Manor; Pietro Perona; David J. Kriegman; Serge J. Belongie

We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph using a weighted graph. In the second step a spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders including order two, thus incorporating information of all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explain the reasons for its superior performance. We report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.
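
A minimal sketch of the two-step idea, assuming a simple clique-expansion approximation of the hypergraph (the paper proposes its own, more careful scheme) followed by off-the-shelf spectral partitioning; the edge list and sizes are made up for illustration.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import SpectralClustering  # off-the-shelf spectral partitioning

def hypergraph_to_graph(n_vertices, hyperedges):
    """hyperedges: list of (vertex_tuple, weight). Returns an n x n affinity matrix."""
    W = np.zeros((n_vertices, n_vertices))
    for verts, w in hyperedges:
        for i, j in combinations(verts, 2):
            W[i, j] += w   # clique expansion: spread hyperedge weight over vertex pairs
            W[j, i] += w
    return W

# Pairwise (order-2) and triadic edges are handled in the same framework.
edges = [((0, 1), 1.0), ((1, 2), 1.0), ((3, 4), 1.0), ((2, 3, 4), 0.5)]
W = hypergraph_to_graph(5, edges)
labels = SpectralClustering(n_clusters=2, affinity='precomputed').fit_predict(W)
```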


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Statistical analysis of dynamic actions

Lihi Zelnik-Manor; Michal Irani

Real-world action recognition applications require the development of systems which are fast, can handle a large variety of actions without a priori knowledge of the type of actions, need a minimal number of parameters, and require as short a learning stage as possible. In this paper, we suggest such an approach. We regard dynamic activities as long-term temporal objects, which are characterized by spatio-temporal features at multiple temporal scales. Based on this, we design a simple statistical distance measure between video sequences which captures the similarities in their behavioral content. This measure is nonparametric and can thus handle a wide range of complex dynamic actions. Having a behavior-based distance measure between sequences, we use it for a variety of tasks, including: video indexing, temporal segmentation, and action-based video clustering. These tasks are performed without prior knowledge of the types of actions, their models, or their temporal extents.
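
As a sketch of how such a behavior-based distance can drive indexing and clustering, the snippet below ranks a database against one short example clip and groups sequences by average-linkage clustering over pairwise distances; dist stands in for a measure like the one sketched earlier and is an assumption, not the paper's code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def rank_by_example(example, database, dist, top_k=5):
    """Rank database sequences by their behavioral distance to one short example clip."""
    d = np.array([dist(example, seq) for seq in database])
    order = np.argsort(d)[:top_k]
    return order, d[order]

def cluster_sequences(database, dist, n_clusters=3):
    """Group sequences into action-consistent clusters using only pairwise distances."""
    n = len(database)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dist(database[i], database[j])
    Z = linkage(squareform(D), method='average')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```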


Computer Vision and Pattern Recognition | 2003

Degeneracies, dependencies and their implications in multi-body and multi-sequence factorizations

Lihi Zelnik-Manor; Michal Irani

The body of work on multi-body factorization separates objects whose motions are independent. In this work we show that in many cases objects moving with different 3D motions will be captured as a single object using these approaches. We analyze what causes these degeneracies between objects and suggest an approach for overcoming some of them. We further show that in the case of multiple sequences, linear dependencies can supply information for temporal synchronization of sequences and for spatial matching of points across sequences.
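
One way to picture the dependencies discussed above is a simple effective-rank test: if stacking the trajectory matrices of two objects does not add up their individual ranks, their motions are linearly dependent and a factorization-based segmentation may merge them. The sketch below is an illustration under that assumption, not the paper's algorithm.

```python
# Illustrative linear-dependence test between two objects' trajectory matrices.
import numpy as np

def effective_rank(M, tol=1e-6):
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

def motions_dependent(W1, W2, tol=1e-6):
    """W1, W2: (2F x P) trajectory matrices of two tracked point sets over F frames.
    If their column spaces overlap, the rank of the stacked matrix is smaller than
    the sum of the individual ranks, the degeneracy that makes the objects inseparable."""
    r1, r2 = effective_rank(W1, tol), effective_rank(W2, tol)
    r12 = effective_rank(np.hstack([W1, W2]), tol)
    return r12 < r1 + r2
```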


IEEE Transactions on Signal Processing | 2011

Sensing Matrix Optimization for Block-Sparse Decoding

Lihi Zelnik-Manor; Kevin Rosenblum; Yonina C. Eldar

Recent work has demonstrated that using a carefully designed sensing matrix, rather than a random one, can improve the performance of compressed sensing. In particular, a well-designed sensing matrix can reduce the coherence between the atoms of the equivalent dictionary, and as a consequence, reduce the reconstruction error. In some applications, the signals of interest can be well approximated by a union of a small number of subspaces (e.g., face recognition and motion segmentation). This implies the existence of a dictionary which leads to block-sparse representations. In this work, we propose a framework for sensing matrix design that improves the ability of block-sparse approximation techniques to reconstruct and classify signals. This method is based on minimizing a weighted sum of the interblock coherence and the subblock coherence of the equivalent dictionary. Our experiments show that the proposed algorithm significantly improves the signal recovery and classification ability of the Block-OMP algorithm compared to sensing matrix optimization methods that do not employ block structure.
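
The sketch below computes the two quantities the objective weighs, inter-block coherence and sub-block coherence, for an equivalent dictionary obtained by multiplying a sensing matrix A with a dictionary Phi that has equally sized blocks; the matrices, sizes, and function names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def block_coherences(D, block_size):
    """Return (inter-block coherence, sub-block coherence) of a column-normalized D."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    n_blocks = D.shape[1] // block_size
    blocks = [D[:, i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
    inter = 0.0
    for l in range(n_blocks):
        for r in range(n_blocks):
            if l == r:
                continue
            G = blocks[l].T @ blocks[r]
            inter = max(inter, np.linalg.norm(G, 2) / block_size)  # spectral norm per pair
    sub = 0.0
    for B in blocks:
        G = np.abs(B.T @ B) - np.eye(block_size)   # off-diagonal inner products only
        sub = max(sub, G.max())
    return inter, sub

# Usage with a random sensing matrix A and dictionary Phi (illustrative sizes).
rng = np.random.default_rng(0)
A, Phi = rng.standard_normal((20, 64)), rng.standard_normal((64, 128))
mu_block, nu_sub = block_coherences(A @ Phi, block_size=4)
```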


Computer Vision and Pattern Recognition | 2005

Hybrid models for human motion recognition

Claudio Fanti; Lihi Zelnik-Manor; Pietro Perona

Probabilistic models have been previously shown to be efficient and effective for modeling and recognition of human motion. In particular we focus on methods which represent the human motion model as a triangulated graph. Previous approaches learned models based just on positions and velocities of the body parts while ignoring their appearance. Moreover, a heuristic approach was commonly used to obtain translation invariance. In this paper we suggest an improved approach for learning such models and using them for human motion recognition. The suggested approach combines multiple cues, i.e., positions, velocities and appearance, into both the learning and detection phases. Furthermore, we introduce global variables in the model, which can represent global properties such as translation, scale or view-point. The model is learned in an unsupervised manner from unlabelled data. We show that the suggested hybrid probabilistic model (which combines global variables, like translation, with local variables, like relative positions and appearances of body parts), leads to: (i) faster convergence of the learning phase, (ii) robustness to occlusions, and (iii) a higher recognition rate.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Multi-frame estimation of planar motion

Lihi Zelnik-Manor; Michal Irani

Traditional plane alignment techniques are typically performed between pairs of frames. We present a method for extending existing two-frame planar motion estimation techniques into a simultaneous multi-frame estimation, by exploiting multi-frame subspace constraints of planar surfaces. The paper has three main contributions: 1) we show that when the camera calibration does not change, the collection of all parametric image motions of a planar surface in the scene across multiple frames is embedded in a low dimensional linear subspace; 2) we show that the relative image motion of multiple planar surfaces across multiple frames is embedded in a yet lower dimensional linear subspace, even with varying camera calibration; and 3) we show how these multi-frame constraints can be incorporated into simultaneous multi-frame estimation of planar motion, without explicitly recovering any 3D information, or camera calibration. The resulting multi-frame estimation process is more constrained than the individual two-frame estimations, leading to more accurate alignment, even when applied to small image regions.
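
A minimal sketch of how a multi-frame subspace constraint can be imposed, assuming per-frame 8-parameter planar motions are stacked into a matrix and projected onto a low-rank subspace with a truncated SVD; the chosen rank and the synthetic data are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

def project_to_subspace(P, rank):
    """P: (F x 8) matrix whose rows are per-frame parametric motions of one plane.
    Returns the closest rank-limited approximation, i.e. all frames constrained
    to share a low-dimensional linear subspace of motion parameters."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

# Usage: noisy two-frame estimates across 30 frames, jointly constrained.
rng = np.random.default_rng(1)
basis = rng.standard_normal((3, 8))                  # assumed low-dimensional subspace
P_noisy = rng.standard_normal((30, 3)) @ basis + 0.01 * rng.standard_normal((30, 8))
P_constrained = project_to_subspace(P_noisy, rank=3)
```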


Computer Vision and Pattern Recognition | 2008

A walk through the web’s video clips

Sara Zanetti; Lihi Zelnik-Manor; Pietro Perona

Approximately 10^5 video clips are posted every day on the Web. The popularity of Web-based video databases poses a number of challenges to machine vision scientists: how do we organize, index and search such a large wealth of data? Content-based video search and classification have been proposed in the literature and applied successfully to analyzing movies, TV broadcasts and lab-made videos. We explore the performance of some of these algorithms on a large data-set of approximately 3000 videos. We collected our data-set directly from the Web, minimizing bias for content or quality, so as to have a faithful representation of the statistics of this medium. We find that the algorithms that we have come to trust do not work well on video clips, because their quality is lower and their subject is more varied. We will make the data publicly available to encourage further research.


Computer Vision and Pattern Recognition | 2012

On SIFTs and their scales

Tal Hassner; Viki Mayzels; Lihi Zelnik-Manor

Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask: when dense, scale-invariant matching is required, is scale selection necessary for these pixels, and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different; selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods.
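
A rough sketch of the subspace-plus-mapping idea, assuming per-pixel SIFTs at several scales are already available (multi_scale_sifts is a placeholder): it fits a low-dimensional basis with an SVD and vectorizes the resulting projection matrix, one common subspace-to-point mapping, not necessarily the exact SLS construction.

```python
import numpy as np

def scale_less_descriptor(multi_scale_sifts, subspace_dim=8):
    """multi_scale_sifts: (n_scales x 128) SIFTs of one pixel; returns one fixed-length vector."""
    X = multi_scale_sifts - multi_scale_sifts.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:subspace_dim].T          # 128 x d orthonormal basis spanning the descriptor set
    Q = A @ A.T                      # projection matrix uniquely represents the subspace
    iu = np.triu_indices(Q.shape[0])
    return Q[iu]                     # subspace-to-point mapping: keep the upper triangle

# Distances between such descriptors approximate distances between subspaces,
# so standard point-based (e.g. nearest-neighbor) matching can be applied unchanged.
```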


Computer Graphics Forum | 2010

Puzzle-like Collage

Stas Goferman; Ayellet Tal; Lihi Zelnik-Manor

Collages have been a common form of artistic expression since their first appearance in China around 200 BC. Recently, with the advance of digital cameras and digital image editing tools, collages have also gained popularity as a summarization tool. This paper proposes an approach for automating collage construction, which is based on assembling regions of interest of arbitrary shape in a puzzle-like manner. We show that this approach produces collages that are informative, compact, and eye-pleasing. This is obtained by following artistic principles and assembling the extracted cutouts such that their shapes complete each other.

Collaboration


Dive into Lihi Zelnik-Manor's collaborations.

Top Co-Authors

Michal Irani, Weizmann Institute of Science
Pietro Perona, California Institute of Technology
Tal Hassner, Open University of Israel
Ayellet Tal, Technion – Israel Institute of Technology
Viki Mayzels, Technion – Israel Institute of Technology
Sara Zanetti, California Institute of Technology
George Leifman, Technion – Israel Institute of Technology
Itamar Friedman, Technion – Israel Institute of Technology