
Publication


Featured research published by Eshed Ohn-Bar.


Computer Vision and Pattern Recognition | 2013

Joint Angles Similarities and HOG2 for Action Recognition

Eshed Ohn-Bar; Mohan M. Trivedi

We propose a set of features derived from skeleton tracking of the human body and from depth maps for the purpose of action recognition. The proposed descriptors are easy to implement, produce relatively small feature sets, and the multi-class classification scheme is fast and suitable for real-time applications. We intuitively characterize actions using pairwise affinities between view-invariant joint-angle features over the performance of an action. Additionally, a new descriptor for spatio-temporal feature extraction from color and depth images is introduced. This descriptor applies a modified histogram of oriented gradients (HOG) algorithm to every frame, collects the resulting features into a 2D array, and then applies the same algorithm again to that array (hence the name HOG2). Both feature sets are evaluated in a bag-of-words scheme using a linear SVM, showing state-of-the-art results on public datasets from different domains of human-computer interaction.
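The HOG2 construction described in the abstract (per-frame HOG features stacked into a 2D array, to which the same descriptor is applied a second time) can be sketched as follows. This is a simplified, single-cell numpy variant for illustration only; the paper's implementation uses a modified HOG with spatial cells:

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Simplified single-cell HOG: a gradient-orientation histogram,
    weighted by gradient magnitude, over the whole 2D array."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientations
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-8)

def hog2(frames, n_bins=9):
    """HOG2: per-frame HOG features stacked row-wise into a 2D array,
    then the same descriptor applied again to that array."""
    per_frame = np.vstack([hog_descriptor(f, n_bins) for f in frames])
    return hog_descriptor(per_frame, n_bins)

rng = np.random.default_rng(0)
video = [rng.random((32, 32)) for _ in range(20)]      # toy 20-frame clip
print(hog2(video).shape)  # (9,)
```

The second HOG pass summarizes how the per-frame histograms change over time, which is what makes the descriptor spatio-temporal.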


IEEE Transactions on Intelligent Transportation Systems | 2014

Hand Gesture Recognition in Real-Time for Automotive Interfaces: A Multimodal Vision-based Approach and Evaluations

Eshed Ohn-Bar; Mohan M. Trivedi

In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.


IEEE Transactions on Intelligent Transportation Systems | 2015

Learning to Detect Vehicles by Clustering Appearance Patterns

Eshed Ohn-Bar; Mohan M. Trivedi

This paper studies efficient means of dealing with intra-category diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.


IEEE Transactions on Intelligent Vehicles | 2016

Looking at Humans in the Age of Self-Driving and Highly Automated Vehicles

Eshed Ohn-Bar; Mohan M. Trivedi

This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.


Computer Vision and Image Understanding | 2015

On surveillance for safety critical events

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Highlights:
- A distributed camera-sensor system for driver assistance and situational awareness.
- Systematic, comparative evaluation of cues for prediction of safety-critical events.
- Real-time prediction of overtaking and braking maneuvers.
- Detailed temporal analysis of the utility of various cues for maneuver prediction.
- Early prediction (1-2 s) before the maneuver is shown on real-world data.

We study techniques for monitoring and understanding real-world human activities, in particular those of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically overtake and brake events. The study of this particular domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help to recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings.


International Conference on Pattern Recognition | 2014

Head, Eye, and Hand Patterns for Driver Activity Recognition

Eshed Ohn-Bar; Sujitha Martin; Ashish Tawari; Mohan M. Trivedi

In this paper, a multiview, multimodal vision framework is proposed in order to characterize driver activity based on head, eye, and hand cues. Leveraging the three types of cues allows for a richer description of the driver's state and for improved activity detection performance. First, regions of interest are extracted from two videos, one observing the driver's hands and one the driver's head. Next, hand location hypotheses are generated and integrated with a head pose and facial landmark module in order to classify driver activity into three states: wheel region interaction with two hands on the wheel, gear region activity, or instrument cluster region activity. The method is evaluated on a video dataset captured in on-road settings.


International Conference on Pattern Recognition | 2016

To boost or not to boost? On the limits of boosted trees for object detection

Eshed Ohn-Bar; Mohan M. Trivedi

We aim to study the modeling limitations of the commonly employed boosted decision trees classifier. Inspired by the success of large, data-hungry visual recognition models (e.g. deep convolutional neural networks), this paper focuses on the relationship between modeling capacity of the weak learners, dataset size, and dataset properties. A set of novel experiments on the Caltech Pedestrian Detection benchmark results in the best known performance among non-CNN techniques while operating at fast run-time speed. Furthermore, the performance is on par with deep architectures (9.71% log-average miss rate), while using only HOG+LUV channels as features. The conclusions from this study are shown to generalize over different object detection domains as demonstrated on the FDDB face detection benchmark (93.37% accuracy). Despite the impressive performance, this study reveals the limited modeling capacity of the common boosted trees model, motivating a need for architectural changes in order to compete with multi-level and very deep architectures.
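The boosted decision trees whose capacity the paper studies can be illustrated with a minimal discrete AdaBoost over depth-1 trees (decision stumps), written here in plain numpy on synthetic 2D data. This is an illustrative sketch of the weak-learner/boosting machinery only, not the paper's detector, which boosts deeper trees over pixel lookup features on HOG+LUV channels:

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the decision stump (feature, threshold, polarity) with the
    lowest weighted error; y is in {-1, +1} and w sums to 1."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, rounds=20):
    """Discrete AdaBoost: reweight examples toward the mistakes of
    previous stumps; return the weighted ensemble."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        f, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, f, thr, pol))
    return ensemble

def predict(ensemble, X):
    s = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
            for a, f, t, p in ensemble)
    return np.sign(s)

# A diagonal decision boundary: no single axis-aligned stump fits it,
# but the boosted ensemble approximates it with a staircase.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1.0, 1, -1)
ens = adaboost(X, y, rounds=40)
acc = (predict(ens, X) == y).mean()
print(round(float(acc), 2))
```

The staircase behavior is exactly the capacity question the paper raises: each added stump refines the boundary, but the model family remains a sum of axis-aligned splits.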


International Conference on Intelligent Transportation Systems | 2015

On Performance Evaluation of Driver Hand Detection Algorithms: Challenges, Dataset, and Metrics

Nikhil Das; Eshed Ohn-Bar; Mohan M. Trivedi

Hands are used by drivers to perform primary and secondary tasks in the car. Hence, the study of driver hands has several potential applications, from driver behavior and alertness analysis to infotainment and human-machine interaction features. The problem is also relevant to other domains of robotics and engineering that involve cooperation with humans. In order to study this challenging computer vision and machine learning task, our paper introduces an extensive, public, naturalistic video-based hand detection dataset in the automotive environment. The dataset highlights the challenges that may be observed in naturalistic driving settings, including different background complexities, illumination settings, users, and viewpoints. In each frame, hand bounding boxes are provided, as well as left/right, driver/passenger, and number-of-hands-on-the-wheel annotations. A comparison with existing hand detection datasets highlights the novel characteristics of the proposed dataset.


Intelligent Vehicles Symposium | 2014

Predicting driver maneuvers by learning holistic features

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

In this work, we propose a framework for the recognition and prediction of driver maneuvers by considering holistic cues. With an array of sensors, the driver's head, hand, and foot gestures are captured in a synchronized manner together with lane, surrounding-agent, and vehicle parameters. An emphasis is put on real-time algorithms. The cues are processed and fused using a latent-dynamic discriminative framework. As a case study, driver activity recognition and prediction in overtaking situations is performed using a naturalistic, on-road dataset. A consequence of this work would be the development of more effective driver analysis and assistance systems.


Computer Vision and Pattern Recognition | 2014

Fast and Robust Object Detection Using Visual Subcategories

Eshed Ohn-Bar; Mohan M. Trivedi

Object classes generally contain large intra-class variation, which poses a challenge to object detection schemes. In this work, we study visual subcategorization as a means of capturing appearance variation. First, training data is clustered using color and gradient features. Second, the clustering is used to learn an ensemble of models that capture visual variation due to varying orientation, truncation, and occlusion degree. Fast object detection is achieved with integral image features and pixel lookup features. The framework is studied in the context of vehicle detection on the challenging KITTI dataset.
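The two-step recipe above (cluster the positives into visual subcategories, then learn one model per cluster and take the maximum ensemble score at detection time) can be sketched on toy 2D "appearance features". Everything here, the synthetic data, the minimal k-means, and the least-squares linear models, is an illustrative stand-in for the paper's actual features and detectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy features for two visual modes of one object class
# (e.g. front-view vs. side-view vehicles), plus background negatives.
front = rng.normal([4, 0], 0.5, size=(50, 2))
side = rng.normal([0, 4], 0.5, size=(50, 2))
pos = np.vstack([front, side])
neg = rng.normal([2, 2], 0.5, size=(100, 2))

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: split the positives into visual subcategories."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        lab = d.argmin(1)
        centers = np.array([X[lab == j].mean(0) if np.any(lab == j)
                            else centers[j] for j in range(k)])
    return centers, lab

centers, labels = kmeans(pos, k=2)

# One linear model per subcategory (least-squares fit to +1/-1 targets);
# the detection score is the maximum over the ensemble.
models = []
for j in range(2):
    X = np.vstack([pos[labels == j], neg])
    X1 = np.hstack([X, np.ones((len(X), 1))])          # append bias term
    y = np.r_[np.ones((labels == j).sum()), -np.ones(len(neg))]
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    models.append(w)

def score(x):
    x1 = np.r_[x, 1.0]
    return max(w @ x1 for w in models)

print(score(np.array([4.0, 0.0])) > score(np.array([2.0, 2.0])))  # True
```

Taking the max over per-cluster models is what lets the ensemble cover appearance variation (orientation, truncation, occlusion) that a single linear model would average away.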

Collaboration

Eshed Ohn-Bar's top co-authors:

Sujitha Martin, University of California
Ashish Tawari, University of California
Akshay Rangesh, University of California
Cuong Tran, University of California
Kevan Yuen, University of California