
Publication


Featured research published by Daniel Freedman.


Human Factors in Computing Systems | 2015

Accurate, Robust, and Flexible Real-time Hand Tracking

Toby Sharp; Cem Keskin; Jonathan Taylor; Jamie Shotton; David Kim; Christoph Rhemann; Ido Leichter; Alon Vinnikov; Yichen Wei; Daniel Freedman; Pushmeet Kohli; Eyal Krupka; Andrew W. Fitzgibbon; Shahram Izadi

We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.
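The two-stage pipeline described above — discriminative per-frame reinitialization followed by generative model fitting — can be caricatured in a few lines. Everything below is a toy stand-in (a one-parameter "pose", a `sin()` renderer, uniformly sampled candidates in place of the paper's learned multi-layered predictors), sketched purely to show the control flow, not the actual system:

```python
import numpy as np

def model_error(pose, observation):
    """Toy generative model: the 'rendered hand' is just sin(pose);
    the fitting error is the squared difference to the observed value."""
    return (np.sin(pose) - observation) ** 2

def discriminative_reinit(observation, n_candidates=32):
    """Stage 1 (stand-in): propose a small set of candidate poses per frame.
    The paper uses learned discriminative predictors; here we just sample."""
    return np.linspace(-np.pi, np.pi, n_candidates)

def generative_refine(pose, observation, steps=50, lr=0.1):
    """Stage 2: refine the best candidate by gradient descent on model error."""
    for _ in range(steps):
        grad = 2.0 * (np.sin(pose) - observation) * np.cos(pose)
        pose -= lr * grad
    return pose

def track_frame(observation):
    """Per-frame tracking: reinitialize, pick the best candidate, refine."""
    candidates = discriminative_reinit(observation)
    best = min(candidates, key=lambda p: model_error(p, observation))
    return generative_refine(best, observation)
```

Because stage 1 proposes candidates from scratch every frame, the tracker recovers from temporary failures instead of drifting — the property the abstract calls robust tracking.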


European Conference on Computer Vision | 2014

SRA: Fast Removal of General Multipath for ToF Sensors

Daniel Freedman; Yoni Smolin; Eyal Krupka; Ido Leichter; Mirko Schmidt

A major issue with Time-of-Flight sensors is the presence of multipath interference. We present Sparse Reflections Analysis (SRA), an algorithm for removing this interference which has two main advantages. First, it allows for very general forms of multipath, including interference with three or more paths, diffuse multipath resulting from Lambertian surfaces, and combinations thereof. SRA removes this general multipath with robust techniques based on L1 optimization. Second, due to a novel dimension reduction, we are able to produce a very fast version of SRA, which is able to run at frame rate. Experimental results on both synthetic data with ground truth, as well as real images of challenging scenes, validate the approach.
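The core idea — treating the backscattering as a sparse, nonnegative vector and recovering it by L1 minimization — can be illustrated with a toy linear program. This is a generic sparse-recovery sketch under invented sizes and a random measurement matrix, not the paper's algorithm or its dimension-reduction trick; with x ≥ 0 the L1 norm is just the sum of entries, so the problem is a plain LP:

```python
import numpy as np
from scipy.optimize import linprog

def sparse_reflections(A, b):
    """Recover a sparse nonnegative backscattering x from measurements
    b = A @ x by L1 minimization:  min 1^T x  s.t.  A x = b,  x >= 0."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    return res.x

# Toy example: 6 measurements of a 10-bin backscattering with 2 true paths.
rng = np.random.default_rng(0)
A = rng.random((6, 10))          # response of each measurement to each bin
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, 0.4  # direct return plus one interfering path
b = A @ x_true
x_hat = sparse_reflections(A, b)
```

The LP returns a solution consistent with the measurements whose L1 norm is no larger than that of the true sparse signal; under suitable conditions on A this coincides with the sparse solution itself.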


Computer Vision and Pattern Recognition | 2014

Discriminative Ferns Ensemble for Hand Pose Recognition

Eyal Krupka; Alon Vinnikov; Ben Klein; Aharon Bar Hillel; Daniel Freedman; Simon P. Stachniak

We present the Discriminative Ferns Ensemble (DFE) classifier for efficient visual object recognition. The classifier architecture is designed to optimize both classification speed and accuracy when a large training set is available. Speed is obtained using simple binary features and direct indexing into a set of tables, and accuracy by using a large-capacity model and careful discriminative optimization. The proposed framework is applied to the problem of hand pose recognition in depth and infrared images, using a very large training set. Both the accuracy and the classification time obtained are considerably superior to relevant competing methods, allowing one to reach accuracy targets with run times orders of magnitude faster than the competition. We show empirically that using DFE, we can significantly reduce classification time by increasing training sample size for a fixed target accuracy. Finally, a DFE result is shown for the MNIST dataset, showing that the method's merit extends beyond depth images.
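The fern mechanism described above — a handful of binary pixel comparisons packed into an integer that directly indexes a per-class score table — admits a minimal sketch. The tables below are hand-filled rather than discriminatively trained, purely to show the data flow:

```python
import numpy as np

class Fern:
    """One fern: K binary pixel comparisons turned into a table index."""
    def __init__(self, pairs, table):
        self.pairs = pairs   # K (p, q) pixel-index pairs
        self.table = table   # (2**K, n_classes) per-class score table

    def score(self, patch):
        idx = 0
        for p, q in self.pairs:
            # Each comparison contributes one bit of the table index.
            idx = (idx << 1) | int(patch[p] > patch[q])
        return self.table[idx]   # direct lookup; no arithmetic on features

def dfe_predict(ferns, patch):
    """Ensemble prediction: sum per-class scores over all ferns, take argmax."""
    total = sum(f.score(patch) for f in ferns)
    return int(np.argmax(total))

# Tiny demo: one fern, two comparisons (K=2), two classes.
patch = np.array([0.9, 0.1, 0.5, 0.3])
fern = Fern(pairs=[(0, 1), (2, 3)],
            table=np.array([[1.0, 0.0],
                            [0.5, 0.5],
                            [0.0, 0.2],
                            [0.1, 0.9]]))
label = dfe_predict([fern], patch)
```

The speed claim in the abstract follows from this structure: classification is K comparisons and one table lookup per fern, with no floating-point feature computation at all.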


International Symposium on Mixed and Augmented Reality | 2016

Reality Skins: Creating Immersive and Tactile Virtual Environments

Lior Shapira; Daniel Freedman

Reality Skins enables mobile and large-scale virtual reality experiences, dynamically generated based on the user's environment. A head-mounted display (HMD) coupled with a depth camera is used to scan the user's surroundings: reconstruct geometry, infer floor plans, and detect objects and obstacles. From these elements we generate a Reality Skin, a 3D environment which replaces office or apartment walls with the corridors of a spaceship or underground tunnels, and replaces chairs and desks, sofas and beds with crates and computer consoles, fungi and crumbling ancient statues. The placement of walls, furniture and objects in the Reality Skin attempts to approximate reality, such that the user can move around and touch virtual objects with tactile feedback from real objects. Each possible Reality Skins world consists of objects, materials and custom scripts. Taking cues from the user's surroundings, we create a unique environment combining these building blocks, attempting to preserve the geometry and semantics of the real world. We tackle 3D environment generation as a constraint satisfaction problem and break it into two parts: first, we use Markov chain Monte Carlo optimization over a simple 2D polygonal model to infer the layout of the environment (the structure of the virtual world); then, we populate the world with various objects and characters, attempting to satisfy geometric (virtual objects should align with objects in the environment), semantic (a virtual chair aligns with a real one), physical (avoid collisions, maintain stability) and other constraints. We find a discrete set of transformations for each object satisfying unary constraints, incorporate pairwise and higher-order constraints, and optimize globally using a recent technique based on semidefinite relaxation.
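The Markov chain Monte Carlo layout step can be caricatured in one dimension: sample a single virtual wall position under an energy that penalizes deviation from the real wall it should approximate. This Metropolis sampler is an illustrative stand-in with invented numbers, not the paper's optimizer over 2D polygonal models:

```python
import numpy as np

def metropolis_layout(real_wall=3.0, steps=2000, step_size=0.3, temp=0.05, seed=0):
    """Metropolis sampling of a 1-D layout variable x (a virtual wall
    position) under energy E(x) = (x - real_wall)^2, encoding the
    constraint that the virtual wall should approximate the real one."""
    rng = np.random.default_rng(seed)
    x = 0.0
    best, best_e = x, (x - real_wall) ** 2
    for _ in range(steps):
        cand = x + rng.normal(scale=step_size)       # random-walk proposal
        e_x = (x - real_wall) ** 2
        e_c = (cand - real_wall) ** 2
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_c < e_x or rng.random() < np.exp(-(e_c - e_x) / temp):
            x = cand
        if (x - real_wall) ** 2 < best_e:            # track the best layout seen
            best, best_e = x, (x - real_wall) ** 2
    return best

wall = metropolis_layout()
```

Occasionally accepting uphill moves is what lets the sampler escape poor local layouts; the real system runs the same idea over wall polygons rather than a single scalar.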


Computer Vision and Image Understanding | 2017

ASIST: Automatic semantically invariant scene transformation

Or Litany; Tal Remez; Daniel Freedman; Lior Shapira; Alexander M. Bronstein; Ran Gal

We present ASIST, a technique for transforming point clouds by replacing objects with their semantically equivalent counterparts. Transformations of this kind have applications in virtual reality, repair of fused scans, and robotics. ASIST is based on a unified formulation of semantic labeling and object replacement; both result from minimizing a single objective. We present numerical tools for the efficient solution of this optimization problem. The method is experimentally assessed on new datasets of both synthetic and real point clouds, and is additionally compared to two recent works on object replacement on data from the corresponding papers.


Human Factors in Computing Systems | 2017

Toward Realistic Hands Gesture Interface: Keeping it Simple for Developers and Machines

Eyal Krupka; Kfir Karmon; Noam Bloom; Daniel Freedman; Ilya Gurvich; Aviv Hurvitz; Ido Leichter; Yoni Smolin; Yuval Tzairi; Alon Vinnikov; Aharon Bar-Hillel

Development of a rich hand-gesture-based interface is currently a tedious process, requiring expertise in computer vision and/or machine learning. We address this problem by introducing a simple language for pose and gesture description, a set of development tools for using it, and an algorithmic pipeline that recognizes it with high accuracy. The language is based on a small set of basic propositions, obtained by applying four predicate types to the fingers and to the palm center: direction, relative location, finger touching and finger folding state. This enables easy development of a gesture-based interface, using coding constructs, gesture definition files or an editing GUI. The language is recognized from 3D camera input with an algorithmic pipeline composed of multiple classification/regression stages, trained on a large annotated dataset. Our experimental results indicate that the pipeline enables successful gesture recognition with a very low computational load, thus enabling a gesture-based interface on low-end processors.
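The proposition types named above lend themselves to a direct sketch: predicates evaluated on fingertip positions, combined by conjunction into a gesture definition. The hand representation, coordinates and thresholds below are invented for illustration; in the paper the propositions are recognized from 3D camera input, not hand-typed coordinates:

```python
import numpy as np

# Hypothetical hand frame: 3-D fingertip positions plus the palm center.
hand = {
    "palm":   np.array([0.00, 0.00, 0.00]),
    "thumb":  np.array([0.03, 0.01, 0.00]),
    "index":  np.array([0.00, 0.09, 0.00]),
    "middle": np.array([0.00, 0.02, 0.00]),   # tip near the palm -> folded
}

UP = np.array([0.0, 1.0, 0.0])

def pointing(hand, finger, axis, thresh=0.05):
    """Direction proposition: tip displaced from the palm along `axis`."""
    return float((hand[finger] - hand["palm"]) @ axis) > thresh

def folded(hand, finger, thresh=0.04):
    """Folding-state proposition: tip close to the palm center."""
    return float(np.linalg.norm(hand[finger] - hand["palm"])) < thresh

def touching(hand, f1, f2, thresh=0.03):
    """Touch proposition: two fingertips close to each other."""
    return float(np.linalg.norm(hand[f1] - hand[f2])) < thresh

# A gesture definition is a conjunction of basic propositions.
point_up = pointing(hand, "index", UP) and folded(hand, "middle")
```

Because each gesture is just a boolean formula over a small fixed set of propositions, developers can author gestures in code or definition files without touching the underlying vision pipeline.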


Archive | 2014

Learning Fast Hand Pose Recognition

Eyal Krupka; Alon Vinnikov; Ben Klein; Aharon Bar-Hillel; Daniel Freedman; Simon P. Stachniak; Cem Keskin

Practical real-time hand pose recognition requires a classifier of high accuracy that runs in a few milliseconds. We present a novel classifier architecture, the Discriminative Ferns Ensemble (DFE), for addressing this challenge. The classifier architecture optimizes both classification speed and accuracy when a large training set is available. Speed is obtained using simple binary features and direct indexing into a set of tables, and accuracy by using a large-capacity model and careful discriminative optimization. The proposed framework is applied to the problem of hand pose recognition in depth and infrared images, using a very large training set. Both the accuracy and the classification time obtained are considerably superior to relevant competing methods, allowing one to reach accuracy targets with run times orders of magnitude faster than the competition. We show empirically that using DFE, we can significantly reduce classification time by increasing training sample size for a fixed target accuracy. Finally, scalability to a large number of classes is tested using a synthetically generated data set of 81 classes.


Archive | 2014

Fast General Multipath Correction in Time-of-Flight Imaging

Daniel Freedman; Eyal Krupka; Yoni Smolin; Ido Leichter; Mirko Schmidt


Archive | 2016

Molding and anchoring physically constrained virtual environments to real-world environments

Lior Shapira; Daniel Freedman


arXiv: Computer Vision and Pattern Recognition | 2018

SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels.

Or Litany; Daniel Freedman

Collaboration


Dive into Daniel Freedman's collaborations.

Top Co-Authors


Ido Leichter

Technion – Israel Institute of Technology
