Publications


Featured research published by Ido Leichter.


Human Factors in Computing Systems | 2015

Accurate, Robust, and Flexible Real-time Hand Tracking

Toby Sharp; Cem Keskin; Jonathan Taylor; Jamie Shotton; David Kim; Christoph Rhemann; Ido Leichter; Alon Vinnikov; Yichen Wei; Daniel Freedman; Pushmeet Kohli; Eyal Krupka; Andrew W. Fitzgibbon; Shahram Izadi

We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.
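
The two-stage structure described above can be summarized as a short sketch. This is a minimal structural outline under stated assumptions, not the authors' implementation: the candidate proposer and hand renderer below are toy stand-ins (real versions would use per-pixel classifiers and a rendered 3D hand mesh), the pose-vector length is arbitrary, and the final local refinement step is omitted. It assumes only numpy.

    import numpy as np

    # Sketch of the per-frame loop: a discriminative stage proposes candidate poses from
    # the depth image, and a generative stage scores them by comparing a rendered model
    # against the observation. All components are toy stand-ins.

    def propose_poses(depth, n=5, dim=26, rng=np.random.default_rng(0)):
        # stand-in for the multi-layered discriminative reinitializer; dim is arbitrary here
        return [rng.normal(size=dim) for _ in range(n)]

    def render_hand_model(pose, shape=(120, 160)):
        # toy "renderer": a smooth synthetic depth image parameterized by the pose vector
        y, x = np.mgrid[0:shape[0], 0:shape[1]]
        return pose[:2].sum() + 0.01 * (x + y)

    def track_frame(depth, previous_pose):
        candidates = propose_poses(depth) + [previous_pose]   # per-frame proposals + temporal prior
        def fitting_error(pose):                              # generative model-fitting score
            return float(((render_hand_model(pose) - depth) ** 2).mean())
        return min(candidates, key=fitting_error)             # a real tracker would refine this locally

    pose = track_frame(np.zeros((120, 160)), previous_pose=np.zeros(26))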


Computer Vision and Image Understanding | 2010

Mean Shift tracking with multiple reference color histograms

Ido Leichter; Michael Lindenbaum; Ehud Rivlin

The Mean Shift tracker is a widely used tool for robustly and quickly tracking the location of an object in an image sequence using the object's color histogram. The reference histogram is typically set to that in the target region in the frame where the tracking is initiated. Often, however, no single view suffices to produce a reference histogram appropriate for tracking the target. In contexts where multiple views of the target are available prior to the tracking, this paper enhances the Mean Shift tracker to use multiple reference histograms obtained from these different target views. This is done while preserving both the convergence and the speed properties of the original tracker. We first suggest a simple method to use multiple reference histograms for producing a single histogram that is more appropriate for tracking the target. Then, to enhance the tracking further, we propose an extension to the Mean Shift tracker where the convex hull of these histograms is used as the target model. Many experimental results demonstrate the successful tracking of targets whose visible colors change drastically and rapidly during the sequence, where the basic Mean Shift tracker obviously fails.
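
As a rough illustration of the first idea (a single combined reference histogram), the sketch below averages histograms from several target views and takes one Bhattacharyya-weighted Mean Shift location step. The data, the averaging rule, and the eight-bin quantization are made up for the example; this is not the paper's combination rule or its convex-hull extension, and it assumes only numpy.

    import numpy as np

    def combine_references(histograms):
        # stand-in for combining multiple reference histograms: simple averaging
        q = np.mean(histograms, axis=0)
        return q / q.sum()

    def mean_shift_step(pixel_xy, pixel_bins, q, p):
        # weights w_i = sqrt(q[b_i] / p[b_i]); the new location is the weighted mean of
        # the pixel coordinates (the kernel-profile derivative is omitted for brevity)
        w = np.sqrt(q[pixel_bins] / np.maximum(p[pixel_bins], 1e-12))
        return (pixel_xy * w[:, None]).sum(axis=0) / w.sum()

    rng = np.random.default_rng(0)
    pixel_xy = rng.uniform(0, 50, size=(100, 2))        # made-up candidate-region pixels
    pixel_bins = rng.integers(0, 8, size=100)           # quantized colors, 8 bins
    p = np.bincount(pixel_bins, minlength=8) / 100.0    # candidate histogram
    q = combine_references([p, np.roll(p, 1)])          # histograms from two "views"
    new_center = mean_shift_step(pixel_xy, pixel_bins, q, p)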


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Mean Shift Trackers with Cross-Bin Metrics

Ido Leichter

Cross-bin metrics have been shown to be more suitable than bin-by-bin metrics for measuring the distance between histograms in various applications. In particular, a visual tracker that minimizes the earth mover's distance (EMD) between the candidate and reference feature histograms has recently been proposed. This tracker was shown to be more robust than the Mean Shift tracker, which employs a bin-by-bin metric. In each frame, the former tracker iteratively shifts the candidate location by one pixel in the direction opposite to the EMD's gradient until no improvement is made. This optimization process involves the clustering of the candidate feature density in feature space, as well as the computation of the EMD between the candidate and reference feature histograms after each shift of the candidate location. In this paper, alternative trackers that employ cross-bin metrics as well, but that are based on Mean Shift (MS) iterations, are derived. The proposed trackers are simpler and faster due to 1) the use of MS-based optimization, which is not restricted to single pixel shifts, 2) abstention from any clustering of feature densities, and 3) abstention from EMD computations in multidimensional spaces.
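
To make the bin-by-bin versus cross-bin distinction concrete, the toy comparison below contrasts an L2 bin-by-bin distance with a quadratic-form cross-bin distance built from a bin-similarity matrix. It is illustrative only; it is not the MS-based derivation proposed in the paper, and the similarity matrix and histograms are made up. Only numpy is assumed.

    import numpy as np

    def bin_by_bin_l2(p, q):
        return float(np.linalg.norm(p - q))

    def cross_bin_distance(p, q, bin_centers, sigma=1.0):
        # quadratic-form distance sqrt((p-q)^T A (p-q)), where A_ij decays with the
        # distance between bin centers, so mass moved to a nearby bin costs less
        D = np.abs(bin_centers[:, None] - bin_centers[None, :])
        A = np.exp(-D / sigma)
        d = p - q
        return float(np.sqrt(d @ A @ d))

    centers = np.arange(8, dtype=float)
    p = np.eye(8)[2]          # all mass in bin 2
    q_near = np.eye(8)[3]     # all mass shifted to the adjacent bin
    q_far = np.eye(8)[7]      # all mass shifted to a distant bin
    # bin-by-bin: both shifts look equally bad; cross-bin: the adjacent shift is cheaper
    print(bin_by_bin_l2(p, q_near), bin_by_bin_l2(p, q_far))
    print(cross_bin_distance(p, q_near, centers), cross_bin_distance(p, q_far, centers))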


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Tracking by Affine Kernel Transformations Using Color and Boundary Cues

Ido Leichter; Michael Lindenbaum; Ehud Rivlin

Kernel-based trackers aggregate image features within the support of a kernel (a mask) regardless of their spatial structure. These trackers spatially fit the kernel (usually in location and in scale) such that a function of the aggregate is optimized. We propose a kernel-based visual tracker that exploits the constancy of color and the presence of color edges along the target boundary. The tracker estimates the best affinity of a spatially aligned pair of kernels, one of which is color-related and the other of which is object boundary-related. In a sense, this work extends previous kernel-based trackers by incorporating the object boundary cue into the tracking process and by allowing the kernels to be affinely transformed instead of only translated and isotropically scaled. These two extensions make for more precise target localization. A more accurately localized target also facilitates safer updating of its reference color model, further enhancing the tracker's robustness. The improved tracking is demonstrated for several challenging image sequences.
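
The sketch below shows only the affine parameterization and a combined two-cue objective searched over a coarse grid of affine parameters. The color and boundary scores are made-up placeholders standing in for the paper's kernel-based color and boundary measurements, and the grid search is not the optimization used by the tracker; only numpy is assumed.

    import numpy as np

    def affine_transform(points, params):
        # params = (a11, a12, a21, a22, tx, ty): the kernel sample points are mapped
        # through a full 2x2 linear part plus a translation
        a11, a12, a21, a22, tx, ty = params
        A = np.array([[a11, a12], [a21, a22]])
        return points @ A.T + np.array([tx, ty])

    def combined_score(params, kernel_pts, color_score, boundary_score):
        warped = affine_transform(kernel_pts, params)
        return color_score(warped) + boundary_score(warped)

    # kernel sample points on a unit circle; toy scores prefer a target of radius 2
    # centered at (10, 5) -- placeholders for histogram and edge measurements
    theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    kernel_pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    target = np.array([10.0, 5.0])
    color_score = lambda pts: -np.linalg.norm(pts.mean(axis=0) - target)
    boundary_score = lambda pts: -abs(np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean() - 2.0)

    best = max(
        (np.array([s, 0.0, 0.0, s, float(tx), float(ty)])
         for s in (1.0, 2.0) for tx in range(15) for ty in range(10)),
        key=lambda prm: combined_score(prm, kernel_pts, color_score, boundary_score),
    )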


European Conference on Computer Vision | 2014

SRA: Fast Removal of General Multipath for ToF Sensors

Daniel Freedman; Yoni Smolin; Eyal Krupka; Ido Leichter; Mirko Schmidt

A major issue with Time-of-Flight sensors is the presence of multipath interference. We present Sparse Reflections Analysis (SRA), an algorithm for removing this interference that has two main advantages. First, it allows for very general forms of multipath, including interference with three or more paths, diffuse multipath resulting from Lambertian surfaces, and combinations thereof. SRA removes this general multipath with robust techniques based on L1 optimization. Second, due to a novel dimension reduction, we are able to produce a very fast version of SRA, which is able to run at frame rate. Experimental results on both synthetic data with ground truth and real images of challenging scenes validate the approach.
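
The core computational step, recovering a sparse vector of returns with an L1-regularized fit, can be sketched generically. The solver below is plain iterative soft-thresholding (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1 on a made-up random dictionary; it is not the paper's SRA formulation or its fast dimension-reduced variant, and it assumes only numpy.

    import numpy as np

    def ista_l1(A, b, lam=0.05, iters=500):
        # iterative soft-thresholding for the L1-regularized least-squares problem
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
        for _ in range(iters):
            g = A.T @ (A @ x - b)                           # gradient of 0.5 * ||Ax - b||^2
            x = x - step * g
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold
        return x

    # toy measurement: two true return paths mixed through a random dictionary plus noise
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 60))
    x_true = np.zeros(60)
    x_true[[5, 40]] = [1.0, 0.4]
    b = A @ x_true + 0.01 * rng.normal(size=20)
    x_hat = ista_l1(A, b)        # recovered vector should be sparse, peaking near bins 5 and 40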


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Bittracker—A Bitmap Tracker for Visual Tracking under Very General Conditions

Ido Leichter; Michael Lindenbaum; Ehud Rivlin

This paper addresses the problem of visual tracking under very general conditions: a possibly non-rigid target whose appearance may drastically change over time; general camera motion; a 3D scene; and no a priori information except initialization. This is in contrast to the vast majority of trackers, which rely on some limited model in which, for example, the target's appearance is known a priori or restricted, the scene is planar, or a pan-tilt-zoom camera is used. Such trackers aim for speed and robustness, but their limited context may cause them to fail in the more general case. The proposed tracker works by approximating, in each frame, a PDF (probability distribution function) of the target's bitmap and then estimating the maximum a posteriori bitmap. The PDF is marginalized over all possible motions per pixel, thus avoiding the stage in which optical flow is determined. This is an advantage over other general-context trackers that do not use the motion cue at all or rely on the error-prone calculation of optical flow. Using a Gibbs distribution with respect to the first-order neighborhood system yields a bitmap PDF whose maximization may be transformed into that of a quadratic pseudo-Boolean function, the maximum of which is approximated via a reduction to a maximum-flow problem. Many experiments demonstrate that the tracker is able to track under the aforementioned general conditions.
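
The reduction mentioned at the end, maximizing a quadratic pseudo-Boolean function (equivalently, minimizing the corresponding binary energy) via max-flow/min-cut, can be illustrated on a tiny binary MRF. This is a generic graph-cut construction with made-up unary costs and a Potts smoothness term, not the paper's actual bitmap energy, and it assumes the networkx package for the min-cut computation.

    import networkx as nx

    unary = {                                   # unary[i] = (cost of label 0, cost of label 1)
        0: (0.2, 1.0), 1: (0.9, 0.1),
        2: (0.8, 0.3), 3: (0.4, 0.6),
    }
    edges = [(0, 1), (2, 3), (0, 2), (1, 3)]    # 2x2 pixel grid, first-order neighborhood
    lam = 0.5                                   # Potts smoothness weight

    G = nx.DiGraph()
    for i, (c0, c1) in unary.items():
        G.add_edge("s", i, capacity=c1)         # this edge is cut if pixel i takes label 1
        G.add_edge(i, "t", capacity=c0)         # this edge is cut if pixel i takes label 0
    for i, j in edges:
        G.add_edge(i, j, capacity=lam)          # one of these is cut if the two labels disagree
        G.add_edge(j, i, capacity=lam)

    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    labels = {i: (0 if i in source_side else 1) for i in unary}   # the estimated binary map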


International Conference on Computer Vision | 2009

Boundary ownership by lifting to 2.1D

Ido Leichter; Michael Lindenbaum

This paper addresses the “boundary ownership” problem, also known as the figure/ground assignment problem. Estimating boundary ownerships is a key step in perceptual organization: it allows higher-level processing to be applied on non-accidental shapes corresponding to figural regions. Existing methods for estimating the boundary ownerships for a given set of boundary curves model the probability distribution function (PDF) of the binary figure/ground random variables associated with the curves. Instead of modeling this PDF directly, the proposed method uses the 2.1D model: it models the PDF of the ordinal depths of the image segments enclosed by the curves. After this PDF is maximized, the boundary ownership of a curve is determined according to the ordinal depths of the two image segments it abuts. This method has two advantages: first, boundary ownership configurations inconsistent with every depth ordering (and thus very likely to be incorrect) are eliminated from consideration; second, it allows for the integration of cues related to image segments (not necessarily adjacent) in addition to those related to the curves. The proposed method models the PDF as a conditional random field (CRF) conditioned on cues related to the curves, T-junctions, and image segments. The CRF is formulated using learnt non-parametric distributions of the cues. The method significantly improves the currently achieved figure/ground assignment accuracy, with 20.7% fewer errors in the Berkeley Segmentation Dataset.
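
The final assignment step, reading boundary ownership off the estimated ordinal depths, is simple enough to show directly. The segment names and depths below are made up, and the CRF inference that would produce the depths is not shown.

    # Once ordinal depths have been estimated for the image segments, each boundary curve
    # is owned by (i.e., its figure side is) the nearer of the two segments it abuts.
    ordinal_depth = {"person": 1, "building": 2, "sky": 3}        # 1 = nearest
    curves = [("person", "building"), ("building", "sky")]        # segments abutting each curve

    ownership = {
        curve: min(curve, key=lambda seg: ordinal_depth[seg])     # figure = nearer segment
        for curve in curves
    }
    # ownership == {("person", "building"): "person", ("building", "sky"): "building"}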


International Conference on Computer Vision | 2007

Visual Tracking by Affine Kernel Fitting Using Color and Object Boundary

Ido Leichter; Michael Lindenbaum; Ehud Rivlin

Kernel-based trackers aggregate image features within the support of a kernel (a mask) regardless of their spatial structure. These trackers spatially fit the kernel (usually in location and in scale) such that a function of the aggregate is optimized. We propose a kernel-based visual tracker that exploits the constancy of color and the presence of color edges along the target boundary. The tracker estimates the best affinity of a spatially aligned pair of kernels, one of which is color-related and the other of which is object boundary-related. In a sense, this work extends previous kernel-based trackers by incorporating the object boundary cue into the tracking process and by allowing the kernels to be affinely transformed instead of only translated and isotropically scaled. These two extensions make for more precise target localization. Moreover, a more accurately localized target facilitates safer updating of its reference color model, further enhancing the tracker's robustness. The improved tracking is demonstrated for several challenging image sequences.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Monotonicity and Error Type Differentiability in Performance Measures for Target Detection and Tracking in Video

Ido Leichter; Eyal Krupka

There exists an abundance of systems and algorithms for multiple target detection and tracking in video, and many measures for evaluating the quality of their output have been proposed. The contribution of this paper lies in the following: first, it argues that such performance measures should have two fundamental properties - monotonicity and error type differentiability; second, it shows that the recently proposed measures do not have either of these properties and are thus less usable; third, it composes a set of simple measures, partly built on common practice, that does have these properties. The informativeness of the proposed set of performance measures is demonstrated through their application on face detection and tracking results.
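
The two properties can be illustrated with a toy evaluation that reports each error type as its own count: adding an error of a given type can only worsen the corresponding count (monotonicity), and different error types never offset one another inside a single fused score (error type differentiability). The matching scheme and data below are simplified and made up; this is not the paper's actual measure set.

    def per_type_error_counts(frames):
        # frames: per-frame ground-truth ids, detection ids, and a gt->detection matching
        # (a hypothetical, pre-computed matching; a real evaluation would match by overlap)
        false_negatives = false_positives = id_switches = 0
        last_match = {}                                          # gt id -> detection id in the previous frame
        for f in frames:
            gt, det = set(f["gt"]), set(f["det"])
            matched = dict(f["matches"])
            false_negatives += len(gt - matched.keys())          # missed targets
            false_positives += len(det - set(matched.values()))  # spurious detections
            id_switches += sum(1 for g, d in matched.items()
                               if g in last_match and last_match[g] != d)
            last_match = matched
        return {"FN": false_negatives, "FP": false_positives, "IDSW": id_switches}

    frames = [
        {"gt": [1, 2], "det": ["a", "b"], "matches": {1: "a", 2: "b"}},
        {"gt": [1, 2], "det": ["a", "c"], "matches": {1: "a", 2: "c"}},   # identity switch on target 2
        {"gt": [1, 2], "det": ["a"],      "matches": {1: "a"}},           # target 2 missed
    ]
    print(per_type_error_counts(frames))   # {'FN': 1, 'FP': 0, 'IDSW': 1}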


Computer Vision and Pattern Recognition | 2012

Monotonicity and error type differentiability in performance measures for target detection and tracking in video

Ido Leichter; Eyal Krupka

There exists an abundance of systems and algorithms for multiple target detection and tracking in video, and many measures for evaluating the quality of their output have been proposed. The contribution of this paper lies in the following: first, it argues that such performance measures should have two fundamental properties — monotonicity and error type differentiability; second, it shows that the recently proposed measures do not have either of these properties and are thus less usable; third, it composes a set of simple measures, partly built on common practice, that does have these properties. The informativeness of the proposed set of performance measures is demonstrated through their application on face detection and tracking results.

Collaboration


Dive into Ido Leichter's collaborations.

Top Co-Authors


Michael Lindenbaum

Technion – Israel Institute of Technology

Ehud Rivlin

Technion – Israel Institute of Technology
