Dan Levi
General Motors
Publications
Featured research published by Dan Levi.
Computer Vision and Pattern Recognition | 2012
Shaul Oron; Aharon Bar-Hillel; Dan Levi; Shai Avidan
Locally Orderless Tracking (LOT) is a visual tracking algorithm that automatically estimates the amount of local (dis)order in the object. This lets the tracker specialize in both rigid and deformable objects on-line and with no prior assumptions. We provide a probabilistic model of the object variations over time. The model is implemented using the Earth Mover's Distance (EMD) with two parameters that control the cost of moving pixels and changing their color. We adjust these costs on-line during tracking to account for the amount of local (dis)order in the object. We show LOT's tracking capabilities on challenging video sequences, both commonly used and new, demonstrating performance comparable to state-of-the-art methods.
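As a rough illustration of this appearance model, the sketch below computes an EMD-style distance between two equal-size sets of unit-mass pixels, where the ground cost trades off moving a pixel against changing its color; with unit masses the EMD reduces to an optimal assignment. The parameter names and the form of the ground cost are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an EMD-style appearance distance in the spirit of LOT.
# sigma_xy and sigma_color are illustrative stand-ins for the on-line adjusted
# costs of moving pixels and changing their color.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ground_cost(p, q, sigma_xy=10.0, sigma_color=20.0):
    """Cost of matching pixel p to pixel q: location term plus color term.
    p, q = (x, y, r, g, b)."""
    d_xy = np.linalg.norm(p[:2] - q[:2]) / sigma_xy
    d_color = np.linalg.norm(p[2:] - q[2:]) / sigma_color
    return d_xy + d_color

def emd_distance(patch_a, patch_b, **kw):
    """EMD between two equal-size sets of unit-mass pixels reduces to an
    optimal assignment problem, solved here with the Hungarian algorithm."""
    cost = np.array([[ground_cost(p, q, **kw) for q in patch_b] for p in patch_a])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Toy usage: two 5-pixel "patches", each row is (x, y, r, g, b)
a = np.random.rand(5, 5) * 50
b = a + np.random.randn(5, 5)          # slightly perturbed copy
print(emd_distance(a, b))              # small distance -> similar appearance
```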
European Conference on Computer Vision | 2010
Aharon Bar-Hillel; Dan Levi; Eyal Krupka; Chen Goldberg
We introduce a new approach for learning part-based object detection through feature synthesis. Our method consists of an iterative process of feature generation and pruning. A feature generation procedure is presented in which basic part-based features are developed into a feature hierarchy using operators for part localization, part refining and part combination. Feature pruning is done using a new feature selection algorithm for linear SVM, termed Predictive Feature Selection (PFS), which is governed by weight prediction. The algorithm makes it possible to choose from O(10^6) features in an efficient but accurate manner. We analyze the validity and behavior of PFS and empirically demonstrate its speed and accuracy advantages over relevant competitors. We present an empirical evaluation of our method on three human detection datasets including the current de facto benchmarks (the INRIA and Caltech pedestrian datasets) and a new challenging dataset of children images in difficult poses. The evaluation suggests that our approach is on a par with the best current methods and advances the state-of-the-art on the Caltech pedestrian training dataset.
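The sketch below illustrates the weight-driven pruning side of such a loop. It ranks features by the magnitude of trained linear-SVM weights as a simple proxy; the actual PFS algorithm instead predicts the weights that candidate features would receive, without retraining on the full candidate pool.

```python
# Hedged sketch of weight-based feature pruning for a linear SVM (a proxy for
# the generate-and-prune loop described above, not the PFS algorithm itself).
import numpy as np
from sklearn.svm import LinearSVC

def prune_features(X, y, keep=100):
    """Iteratively halve the active feature set, dropping the lowest-|w| features."""
    active = np.arange(X.shape[1])
    while len(active) > keep:
        clf = LinearSVC(C=1.0, max_iter=5000).fit(X[:, active], y)
        w = np.abs(clf.coef_).ravel()
        n_next = max(keep, len(active) // 2)            # halve the set each round
        active = active[np.argsort(w)[::-1][:n_next]]   # keep the highest-weight features
    return active

# Toy usage: 500 samples, 1000 candidate features, only the first 5 are informative
X = np.random.randn(500, 1000)
y = (X[:, :5].sum(axis=1) > 0).astype(int)
print(np.sort(prune_features(X, y, keep=20)))
```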
IEEE Transactions on Intelligent Transportation Systems | 2015
Francisco Vicente; Zehua Huang; Xuehan Xiong; Fernando De la Torre; Wende Zhang; Dan Levi
Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR.
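The geometric reasoning step can be pictured as a ray test: cast the estimated gaze ray from the eye position and check whether it passes through an "on-road" window. The sketch below does exactly that; the coordinate convention, plane position and window bounds are illustrative assumptions, not the paper's calibration.

```python
# Minimal sketch of a 3-D geometric EOR check: intersect the gaze ray with a
# plane in front of the driver and test whether the hit point falls inside an
# assumed on-road rectangle. All numeric bounds are illustrative.
import numpy as np

def eyes_off_road(eye_pos, gaze_dir, plane_z=0.8,
                  x_range=(-0.5, 0.5), y_range=(-0.2, 0.3)):
    """eye_pos, gaze_dir: 3-D vectors in a driver-centred frame (z forward, metres).
    Returns True when the gaze ray misses the on-road window."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    if gaze_dir[2] <= 0:                      # looking away from the road entirely
        return True
    t = (plane_z - eye_pos[2]) / gaze_dir[2]  # ray-plane intersection parameter
    hit = eye_pos + t * gaze_dir
    on_road = (x_range[0] <= hit[0] <= x_range[1] and
               y_range[0] <= hit[1] <= y_range[1])
    return not on_road

# Usage: looking roughly straight ahead vs. looking far down (e.g. at a phone)
print(eyes_off_road(np.zeros(3), np.array([0.0, 0.05, 1.0])))  # False (on road)
print(eyes_off_road(np.zeros(3), np.array([0.0, 0.9, 0.3])))   # True (off road)
```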
British Machine Vision Conference | 2015
Dan Levi; Noa Garnett; Ethan Fetaya
General obstacle detection is a key enabler for obstacle avoidance in mobile robotics and autonomous driving. In this paper we address the task of detecting the closest obstacle in each direction from a driving vehicle. As opposed to existing methods based on 3D sensing, we use a single color camera. The main novelty in our approach is the reduction of the task to a column-wise regression problem. The regression is then solved using a deep convolutional neural network (CNN). In addition, we introduce a new loss function based on a semi-discrete representation of the obstacle position probability to train the network. The network is trained using ground truth automatically generated from a laser-scanner point cloud. Using the KITTI dataset, we show that our monocular-based approach outperforms existing camera-based methods, including ones using stereo. We also apply the network to the related task of road segmentation, achieving among the best results on the KITTI road segmentation challenge.
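One way to read the semi-discrete representation is as a softmax over vertical bins per image column, with the continuous ground-truth row scored by linearly interpolating the two neighbouring bin probabilities. The PyTorch sketch below follows that reading; the bin count, image height and variable names are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch of a semi-discrete, column-wise regression loss: each column's
# network output is a distribution over vertical bins, and the continuous
# ground-truth row is scored by interpolating its two neighbouring bins.
import torch
import torch.nn.functional as F

def semi_discrete_loss(logits, y, num_bins=50, img_height=370):
    """logits: (N, num_bins) per-column scores; y: (N,) ground-truth rows in pixels."""
    probs = F.softmax(logits, dim=1)
    pos = y / img_height * (num_bins - 1)          # fractional bin index
    lo = pos.floor().long().clamp(0, num_bins - 2)
    alpha = (pos - lo.float()).clamp(0.0, 1.0)     # interpolation weight
    p = (1 - alpha) * probs.gather(1, lo.unsqueeze(1)).squeeze(1) \
        + alpha * probs.gather(1, (lo + 1).unsqueeze(1)).squeeze(1)
    return -(p + 1e-8).log().mean()

# Toy usage: 4 columns with random scores, ground-truth obstacle rows in pixels
logits = torch.randn(4, 50, requires_grad=True)
y = torch.tensor([120.0, 200.5, 310.0, 45.0])
semi_discrete_loss(logits, y).backward()
print(logits.grad.shape)   # torch.Size([4, 50])
```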
International Conference on Computer Vision | 2011
Aharon Bar-Hillel; Dmitri Hanukaev; Dan Levi
Category level object recognition has improved significantly in the last few years, but machine performance remains unsatisfactory for most real-world applications. We believe this gap may be bridged using additional depth information obtained from range imaging, which was recently used to overcome similar problems in body shape interpretation. This paper presents a system which successfully fuses visual and range imaging for object category classification. We explore fusion at multiple levels: using depth as an attention mechanism, high-level fusion at the classifier level and low-level fusion of local descriptors, and show that each mechanism makes a unique contribution to performance. For low-level fusion we present a new algorithm for training local descriptors, the Generalized Image Feature Transform (GIFT), which generalizes current representations such as SIFT and spatial pyramids and allows for the creation of new representations based on multiple channels of information. We show that our system improves state-of-the-art visual-only and depth-only methods on a diverse dataset of everyday objects.
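The low-level fusion idea can be sketched as building a SIFT-like orientation histogram per channel and concatenating the results. The example below does this for an intensity patch and a depth patch; it illustrates the multi-channel fusion concept only and is not the GIFT algorithm itself.

```python
# Simplified sketch of low-level descriptor fusion: one gradient-orientation
# histogram per channel (intensity, depth), concatenated into a single descriptor.
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Magnitude-weighted 8-bin gradient-orientation histogram of a 2-D patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def fused_descriptor(intensity_patch, depth_patch):
    """Concatenate per-channel histograms into one multi-channel descriptor."""
    return np.concatenate([orientation_histogram(intensity_patch),
                           orientation_histogram(depth_patch)])

# Toy usage on random 16x16 patches from an RGB-D sensor
desc = fused_descriptor(np.random.rand(16, 16), np.random.rand(16, 16))
print(desc.shape)   # (16,)
```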
IEEE Intelligent Vehicles Symposium | 2016
Shay Dekel; Dan Levi; Michael Slutsky; Ilan Shimshoni
Autonomous vehicle driving in urban environments is a challenging task that requires localization accuracy exceeding that available from GPS-based inertial guidance systems. For map-based driving, a 3D laser scanner can be utilized to localize the vehicle within a previously recorded 3D map. Such scanners are, however, not feasible for mass production due to cost considerations. In this paper we present a localization algorithm that first builds a map off-line and then localizes the vehicle with respect to it. First, the map is constructed by a service vehicle equipped with a calibrated stereo camera rig and a high precision navigation system. Then, the global localization ego-pose can be obtained in any vehicle equipped with a standard GPS and a single forward-looking camera, by extracting and matching features to relevant map candidates. We use a recently proposed estimation method called SOREPP (Soft Optimization method for Robust Estimation based on Pose Priors) that utilizes relevant priors for achieving fast and reliable estimation, even with a small fraction of inliers. During the estimation it uses all the matched correspondences without the need for random sampling to find the inliers. This method eventually obtains an outlier-free set of landmarks, used to estimate the ego-pose with high accuracy. We evaluate our algorithm on real-world data comprising a challenging 4.5 km drive. Our algorithm achieves accurate localization results: a mean lateral absolute error of 14.35 cm and a mean longitudinal absolute error of 18.63 cm.
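The soft-weighting idea, using all correspondences but downweighting those that disagree with the current estimate, can be sketched as iteratively reweighted least squares. The example below does this for a 2-D rigid transform initialized from a prior; the actual method estimates a full 6-DoF ego-pose from image-to-map feature matches initialized from the GPS prior, so the weighting kernel and problem size here are illustrative only.

```python
# Hedged sketch of soft, RANSAC-free robust estimation: every correspondence
# contributes, weighted by how well it agrees with the current estimate.
import numpy as np

def soft_rigid_estimate(src, dst, init_t=np.zeros(2), sigma=3.0, n_iters=10):
    """Estimate R, t with dst ~= R @ src + t, downweighting large residuals (IRLS)."""
    R, t = np.eye(2), init_t.copy()
    for _ in range(n_iters):
        residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        w = np.exp(-(residuals / sigma) ** 2)          # soft inlier weights
        mu_s = np.average(src, axis=0, weights=w)
        mu_d = np.average(dst, axis=0, weights=w)
        H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)                    # weighted Kabsch step
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                       # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
    return R, t

# Toy usage: 80% clean correspondences, 20% gross outliers
rng = np.random.default_rng(0)
src = rng.uniform(-10, 10, (50, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R_true.T + np.array([2.0, -1.0])
dst[:10] += rng.uniform(-20, 20, (10, 2))              # corrupt 10 matches
R_est, t_est = soft_rigid_estimate(src, dst)
print(np.round(t_est, 2))                              # should be close to [ 2. -1.]
```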
Machine Vision and Applications | 2014
Aharon Bar Hillel; Ronen Lerner; Dan Levi; Guy Raz
Archive | 2011
Aharon Bar Hillel; Dan Levi
Archive | 2011
Dan Levi; Aharon Bar Hillel
Archive | 2012
Dan Levi; Aharon Bar Hillel