Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Eyal Krupka is active.

Publication


Featured research published by Eyal Krupka.


Human Factors in Computing Systems | 2015

Accurate, Robust, and Flexible Real-time Hand Tracking

Toby Sharp; Cem Keskin; Jonathan Taylor; Jamie Shotton; David Kim; Christoph Rhemann; Ido Leichter; Alon Vinnikov; Yichen Wei; Daniel Freedman; Pushmeet Kohli; Eyal Krupka; Andrew W. Fitzgibbon; Shahram Izadi

We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.


European Conference on Computer Vision | 2010

Part-based feature synthesis for human detection

Aharon Bar-Hillel; Dan Levi; Eyal Krupka; Chen Goldberg

We introduce a new approach for learning part-based object detection through feature synthesis. Our method consists of an iterative process of feature generation and pruning. A feature generation procedure is presented in which basic part-based features are developed into a feature hierarchy using operators for part localization, part refining and part combination. Feature pruning is done using a new feature selection algorithm for linear SVM, termed Predictive Feature Selection (PFS), which is governed by weight prediction. The algorithm makes it possible to choose from O(10^6) features in an efficient but accurate manner. We analyze the validity and behavior of PFS and empirically demonstrate its speed and accuracy advantages over relevant competitors. We present an empirical evaluation of our method on three human detection datasets including the current de-facto benchmarks (the INRIA and Caltech pedestrian datasets) and a new challenging dataset of children images in difficult poses. The evaluation suggests that our approach is on a par with the best current methods and advances the state-of-the-art on the Caltech pedestrian training dataset.
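The paper's PFS algorithm scores each candidate feature by the weight it is predicted to receive if added to the current linear SVM. The abstract does not give the exact predictor, so the sketch below is only a simplified proxy under stated assumptions: it uses a ridge-regularized least-squares model in place of the SVM, and predicts a candidate's weight from its correlation with the current model's residual (a forward-stagewise-style heuristic). All function names here are illustrative, not the authors'.

```python
import numpy as np

def predicted_weight_scores(X_current, X_candidates, y, lam=1.0):
    """Score candidate features by the weight each would be predicted
    to receive if added to the current linear model.

    X_current:    (n, d) features already selected
    X_candidates: (n, m) candidate features
    y:            (n,) regression targets (or +/-1 labels)
    """
    n = X_current.shape[0]
    # Fit a ridge-regularized linear model on the current feature set.
    A = X_current.T @ X_current + lam * np.eye(X_current.shape[1])
    w = np.linalg.solve(A, X_current.T @ y)
    residual = y - X_current @ w
    # Predicted weight of each candidate: correlation with the residual.
    pred_w = X_candidates.T @ residual / n
    return np.abs(pred_w)

def select_top_k(X_current, X_candidates, y, k):
    """Keep the k candidates with the largest predicted weights."""
    scores = predicted_weight_scores(X_current, X_candidates, y)
    return np.argsort(scores)[::-1][:k]
```

Candidates whose predicted weight is large are the ones a retrained linear model would most likely exploit, which is what lets the method sift a huge candidate pool without retraining per feature.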


European Conference on Computer Vision | 2014

SRA: Fast Removal of General Multipath for ToF Sensors

Daniel Freedman; Yoni Smolin; Eyal Krupka; Ido Leichter; Mirko Schmidt

A major issue with Time of Flight sensors is the presence of multipath interference. We present Sparse Reflections Analysis (SRA), an algorithm for removing this interference which has two main advantages. First, it allows for very general forms of multipath, including interference with three or more paths, diffuse multipath resulting from Lambertian surfaces, and combinations thereof. SRA removes this general multipath with robust techniques based on L1 optimization. Second, due to a novel dimension reduction, we are able to produce a very fast version of SRA, which is able to run at frame rate. Experimental results on both synthetic data with ground truth, as well as real images of challenging scenes, validate the approach.
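SRA's specifics (the ToF backscattering model and the dimension reduction) are not reproduced here; as a generic illustration of the L1 machinery such methods build on, the sketch below implements ISTA for the sparse-recovery problem min 0.5*||Ax - b||^2 + lam*||x||_1. The dictionary and data are placeholders, not the paper's sensor model.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """Iterative Shrinkage-Thresholding Algorithm for
    min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L
        # Soft-thresholding: the proximal operator of the L1 norm.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With a sparse true signal and enough measurements, the L1 penalty drives all but the true reflection components to zero, which is the property SRA exploits to separate direct and multipath returns.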


Computer Vision and Pattern Recognition | 2014

Discriminative Ferns Ensemble for Hand Pose Recognition

Eyal Krupka; Alon Vinnikov; Ben Klein; Aharon Bar Hillel; Daniel Freedman; Simon P. Stachniak

We present the Discriminative Ferns Ensemble (DFE) classifier for efficient visual object recognition. The classifier architecture is designed to optimize both classification speed and accuracy when a large training set is available. Speed is obtained using simple binary features and direct indexing into a set of tables, and accuracy by using a large capacity model and careful discriminative optimization. The proposed framework is applied to the problem of hand pose recognition in depth and infrared images, using a very large training set. Both the accuracy and the classification time obtained are considerably superior to relevant competing methods, allowing one to reach accuracy targets with run times orders of magnitude faster than the competition. We show empirically that using DFE, we can significantly reduce classification time by increasing training sample size for a fixed target accuracy. Finally, a DFE result is shown for the MNIST dataset, showing the method's merit extends beyond depth images.
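The speed of DFE comes from each fern turning a handful of binary pixel comparisons into a table index, so classification is a few lookups and additions. DFE itself trains the tables discriminatively (which the sketch below does not reproduce); as a simplified, hedged illustration of the indexing idea, here is a classic count-based random-ferns classifier on 1-D "patches":

```python
import numpy as np

class RandomFerns:
    """Toy random-ferns classifier: binary comparisons -> table index -> score.

    Note: the DFE paper optimizes the tables discriminatively; here the
    tables hold smoothed class-conditional counts, which keeps the sketch
    short while preserving the speed-giving table lookup.
    """

    def __init__(self, n_ferns=8, n_bits=6, n_classes=2, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        # Random (i, j) index pairs per fern, with i != j guaranteed.
        self.pairs = []
        for _ in range(n_ferns):
            i = rng.integers(0, dim, n_bits)
            j = (i + rng.integers(1, dim, n_bits)) % dim
            self.pairs.append((i, j))
        self.tables = np.ones((n_ferns, 2 ** n_bits, n_classes))  # Laplace prior

    def _index(self, x, f):
        i, j = self.pairs[f]
        bits = (x[i] > x[j]).astype(np.int64)   # binary comparison features
        return int(bits @ (1 << np.arange(len(bits))))  # pack into table index

    def fit(self, X, y):
        for x, label in zip(X, y):
            for f in range(len(self.pairs)):
                self.tables[f, self._index(x, f), label] += 1
        # Per-fern log P(index | class).
        self.log_tables = np.log(self.tables / self.tables.sum(axis=1, keepdims=True))

    def predict(self, X):
        out = []
        for x in X:
            score = np.zeros(self.tables.shape[2])
            for f in range(len(self.pairs)):
                score += self.log_tables[f, self._index(x, f)]
            out.append(int(np.argmax(score)))
        return np.array(out)
```

Prediction cost is one table lookup per fern regardless of training-set size, which is why accuracy can be bought with more data rather than more runtime.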


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Monotonicity and Error Type Differentiability in Performance Measures for Target Detection and Tracking in Video

Ido Leichter; Eyal Krupka

There exists an abundance of systems and algorithms for multiple target detection and tracking in video, and many measures for evaluating the quality of their output have been proposed. The contribution of this paper lies in the following: first, it argues that such performance measures should have two fundamental properties - monotonicity and error type differentiability; second, it shows that the recently proposed measures do not have either of these properties and are thus less usable; third, it composes a set of simple measures, partly built on common practice, that does have these properties. The informativeness of the proposed set of performance measures is demonstrated through their application on face detection and tracking results.
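The paper's point is that a usable measure must get strictly worse when an error is added (monotonicity) and must let the user tell error types apart (error type differentiability). As a minimal illustration — not the paper's actual measures — the sketch below counts matches, false negatives, and false positives per frame via greedy nearest-neighbor matching; reporting FP and FN separately keeps error types differentiable, and each spurious or missed target worsens exactly one count.

```python
import numpy as np

def frame_errors(gt, det, max_dist=1.0):
    """Greedily match detections to ground-truth targets by distance.

    gt, det: (n, 2) arrays of target / detection positions in one frame.
    Returns (matches, false_negatives, false_positives).
    """
    gt = np.asarray(gt, float).reshape(-1, 2)
    det = np.asarray(det, float).reshape(-1, 2)
    unmatched_det = set(range(len(det)))
    matches = 0
    for g in gt:
        if not unmatched_det:
            break
        # Nearest still-unmatched detection to this target.
        cand = min(unmatched_det, key=lambda j: np.linalg.norm(det[j] - g))
        if np.linalg.norm(det[cand] - g) <= max_dist:
            unmatched_det.remove(cand)
            matches += 1
    fn = len(gt) - matches          # targets no detection accounted for
    fp = len(unmatched_det)         # detections matching no target
    return matches, fn, fp
```

A single scalar combining FP and FN (e.g. their sum) would stay monotonic but lose differentiability, which is exactly the trade-off the paper examines.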


Computer Vision and Pattern Recognition | 2012

Monotonicity and error type differentiability in performance measures for target detection and tracking in video

Ido Leichter; Eyal Krupka

There exists an abundance of systems and algorithms for multiple target detection and tracking in video, and many measures for evaluating the quality of their output have been proposed. The contribution of this paper lies in the following: first, it argues that such performance measures should have two fundamental properties — monotonicity and error type differentiability; second, it shows that the recently proposed measures do not have either of these properties and are thus less usable; third, it composes a set of simple measures, partly built on common practice, that does have these properties. The informativeness of the proposed set of performance measures is demonstrated through their application on face detection and tracking results.


International Conference on Image Analysis and Processing | 2009

Reducing Keypoint Database Size

Shahar Jamshy; Eyal Krupka; Yehezkel Yeshurun

Keypoints are high dimensional descriptors for local features of an image or an object. Keypoint extraction is the first task in various computer vision algorithms, where the keypoints are then stored in a database used as the basis for comparing images or image features. Keypoints may be based on image features extracted by feature detection operators or on a dense grid of features. Both ways produce a large number of features per image, causing both time and space performance challenges when upscaling the problem. We propose a novel framework for reducing the size of the keypoint database by learning which keypoints are beneficial for a specific application and using this knowledge to filter out a large portion of the keypoints. We demonstrate this approach on an object recognition application that uses a keypoint database. By using leave one out K nearest neighbor regression we significantly reduce the number of keypoints with relatively small reduction in performance.
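The core idea is to predict each keypoint's usefulness from similar keypoints via leave-one-out K nearest neighbor regression, and keep only keypoints predicted to be beneficial. The sketch below is a simplified numpy rendering of that idea under stated assumptions: each keypoint carries a measured utility score (e.g. how often it yielded a correct match for the target application), and its predicted utility is the mean utility of its K nearest neighbors in descriptor space, excluding itself. Function names are illustrative.

```python
import numpy as np

def loo_knn_utility(descriptors, utilities, k=3):
    """Leave-one-out KNN regression of keypoint utility.

    descriptors: (n, d) keypoint descriptors
    utilities:   (n,) measured usefulness of each keypoint
    Returns each keypoint's utility as predicted from its neighbors.
    """
    d2 = ((descriptors[:, None, :] - descriptors[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # leave-one-out: exclude self
    nn = np.argsort(d2, axis=1)[:, :k]     # k nearest neighbors per keypoint
    return utilities[nn].mean(axis=1)

def filter_keypoints(descriptors, utilities, k=3, threshold=0.5):
    """Keep indices of keypoints predicted to be beneficial."""
    pred = loo_knn_utility(descriptors, utilities, k)
    return np.where(pred >= threshold)[0]
```

Because the prediction generalizes across similar descriptors, keypoints from regions that historically produced bad matches can be dropped wholesale, shrinking the database with little loss.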


Human Factors in Computing Systems | 2017

Toward Realistic Hands Gesture Interface: Keeping it Simple for Developers and Machines

Eyal Krupka; Kfir Karmon; Noam Bloom; Daniel Freedman; Ilya Gurvich; Aviv Hurvitz; Ido Leichter; Yoni Smolin; Yuval Tzairi; Alon Vinnikov; Aharon Bar-Hillel

Development of a rich hand-gesture-based interface is currently a tedious process, requiring expertise in computer vision and/or machine learning. We address this problem by introducing a simple language for pose and gesture description, a set of development tools for using it, and an algorithmic pipeline that recognizes it with high accuracy. The language is based on a small set of basic propositions, obtained by applying four predicate types to the fingers and to palm center: direction, relative location, finger touching and finger folding state. This enables easy development of a gesture-based interface, using coding constructs, gesture definition files or an editing GUI. The language is recognized from 3D camera input with an algorithmic pipeline composed of multiple classification/regression stages, trained on a large annotated dataset. Our experimental results indicate that the pipeline enables successful gesture recognition with a very low computational load, thus enabling a gesture-based interface on low-end processors.
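The language builds gestures from basic propositions of four predicate types applied to fingers and the palm center: direction, relative location, finger touching, and finger folding state. The abstract does not give the concrete syntax, so the sketch below is a hypothetical rendering: a hand pose is a plain record of per-finger directions, fold states, and fingertip/palm positions, one toy predicate of each type is defined, and a gesture pose is their conjunction. All field and function names are illustrative, not the paper's actual language.

```python
import numpy as np

# Toy hand-pose record: per-finger direction (unit vector), fold state,
# fingertip positions, and palm center. Field names are illustrative.

def direction_is(pose, finger, axis, thresh=0.8):
    """Direction predicate: finger points along a given world axis."""
    return float(np.dot(pose["dir"][finger], axis)) > thresh

def is_folded(pose, finger):
    """Folding-state predicate."""
    return pose["folded"][finger]

def touching(pose, f1, f2, max_dist=1.0):
    """Finger-touching predicate: two fingertips are close."""
    return np.linalg.norm(pose["tip"][f1] - pose["tip"][f2]) < max_dist

def above_palm(pose, finger):
    """Relative-location predicate: fingertip above the palm center."""
    return pose["tip"][finger][1] > pose["palm"][1]

UP = np.array([0.0, 1.0, 0.0])

def thumbs_up(pose):
    """A gesture pose is a conjunction of basic propositions."""
    return (direction_is(pose, "thumb", UP)
            and above_palm(pose, "thumb")
            and all(is_folded(pose, f)
                    for f in ("index", "middle", "ring", "pinky")))
```

Because each proposition is cheap to evaluate once the per-finger quantities are regressed from the camera, the recognizer can check many gesture definitions per frame at low computational cost.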


Archive | 2014

Learning Fast Hand Pose Recognition

Eyal Krupka; Alon Vinnikov; Ben Klein; Aharon Bar-Hillel; Daniel Freedman; Simon P. Stachniak; Cem Keskin

Practical real-time hand pose recognition requires a classifier of high accuracy, running in a few millisecond speed. We present a novel classifier architecture, the Discriminative Ferns Ensemble (DFE), for addressing this challenge. The classifier architecture optimizes both classification speed and accuracy when a large training set is available. Speed is obtained using simple binary features and direct indexing into a set of tables, and accuracy by using a large capacity model and careful discriminative optimization. The proposed framework is applied to the problem of hand pose recognition in depth and infrared images, using a very large training set. Both the accuracy and the classification time obtained are considerably superior to relevant competing methods, allowing one to reach accuracy targets with runtime orders of magnitude faster than the competition. We show empirically that using DFE, we can significantly reduce classification time by increasing training sample size for a fixed target accuracy. Finally, scalability to a large number of classes is tested using a synthetically generated data set of 81 classes.


Archive | 2010

Event Matching in Social Networks

Eyal Krupka; Igor Abramovski; Igor Kviatkovsky

Collaboration


Dive into Eyal Krupka's collaborations.

Top Co-Authors


Ido Leichter

Technion – Israel Institute of Technology


Aharon Bar-Hillel

Hebrew University of Jerusalem


Naftali Tishby

Hebrew University of Jerusalem
