
Publication


Featured research published by Erik Murphy-Chutorian.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Head Pose Estimation in Computer Vision: A Survey

Erik Murphy-Chutorian; Mohan M. Trivedi

The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.


International Conference on Intelligent Transportation Systems | 2007

Head Pose Estimation for Driver Assistance Systems: A Robust Algorithm and Experimental Evaluation

Erik Murphy-Chutorian; Anup Doshi; Mohan M. Trivedi

Recognizing driver awareness is an important prerequisite for the design of advanced automotive safety systems. Since visual attention is constrained to a driver's field of view, knowing where a driver is looking provides useful cues about their activity and awareness of the environment. This work presents an identity- and lighting-invariant system to estimate a driver's head pose. The system is fully autonomous and operates online in daytime and nighttime driving conditions, using a monocular video camera sensitive to visible and near-infrared light. We investigate the limitations of alternative systems when operated in a moving vehicle and compare them against our approach, which integrates Localized Gradient Orientation histograms with support vector machines for regression. We estimate the orientation of the driver's head in two degrees of freedom and evaluate the accuracy of our method in a vehicular testbed equipped with a cinematic motion capture system.
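The feature stage of the approach above can be sketched roughly as follows. This is a simplified illustration of gradient-orientation histograms over a cell grid; the grid size, bin count, and normalization are illustrative assumptions, not the paper's exact parameters, and in the actual system these features feed support vector regressors for yaw and pitch.

```python
import numpy as np

def lgo_histograms(patch, grid=(4, 4), bins=8):
    """Localized Gradient Orientation histograms (simplified sketch).

    The face patch is divided into a grid of cells; each cell contributes
    a histogram of gradient orientations, weighted by gradient magnitude.
    Concatenating the normalized cell histograms gives a feature vector.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    h, w = patch.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
            xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
            hist, _ = np.histogram(ori[ys, xs], bins=bins,
                                   range=(0, np.pi),
                                   weights=mag[ys, xs])
            feats.append(hist / (hist.sum() + 1e-9))  # per-cell normalization
    return np.concatenate(feats)
```

With a 4x4 grid and 8 bins, the result is a 128-dimensional vector per face patch, which a regressor can map to head orientation.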


IEEE Transactions on Intelligent Transportation Systems | 2010

Head Pose Estimation and Augmented Reality Tracking: An Integrated System and Evaluation for Monitoring Driver Awareness

Erik Murphy-Chutorian; Mohan M. Trivedi

Driver distraction and inattention are prominent causes of automotive collisions. To enable driver-assistance systems to address these problems, we require new sensing approaches to infer a driver's focus of attention. In this paper, we present a new procedure for static head-pose estimation and a new algorithm for visual 3-D tracking. They are integrated into a novel real-time (30 fps) system for measuring the position and orientation of a driver's head. This system consists of three interconnected modules that detect the driver's head, provide initial estimates of the head's pose, and continuously track its position and orientation in six degrees of freedom. The head-detection module consists of an array of Haar-wavelet Adaboost cascades. The initial pose estimation module employs localized gradient orientation (LGO) histograms as input to support vector regressors (SVRs). The tracking module provides a fine estimate of the 3-D motion of the head using a new appearance-based particle filter for 3-D model tracking in an augmented reality environment. We describe our implementation, which utilizes OpenGL-optimized graphics hardware to efficiently compute particle samples in real time. To demonstrate the suitability of this system for real driving situations, we provide a comprehensive evaluation with drivers of varying ages, races, and sexes, spanning daytime and nighttime conditions. To quantitatively measure the accuracy of the system, we compare its estimation results to those of a marker-based cinematic motion-capture system installed in the automotive testbed.
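The three-module architecture described above (detect, estimate, track) can be sketched as the following control loop. All module internals here are placeholders supplied by the caller, not the paper's code; the re-initialization-on-track-loss behavior is an illustrative assumption about how such a pipeline recovers.

```python
class HeadPoseSystem:
    """Skeleton of a detect / statically-estimate / track pipeline."""

    def __init__(self, detector, static_estimator, tracker):
        self.detect = detector            # e.g. cascade of boosted classifiers
        self.estimate = static_estimator  # e.g. LGO histograms + SVR
        self.tracker = tracker            # e.g. appearance-based particle filter
        self.initialized = False

    def process(self, frame):
        if not self.initialized:
            roi = self.detect(frame)
            if roi is None:
                return None               # no head found in this frame
            # Seed the tracker with the detected region and a static
            # pose estimate, then switch to frame-to-frame tracking.
            self.tracker.init(frame, roi, self.estimate(frame, roi))
            self.initialized = True
        pose = self.tracker.update(frame)  # 6-DOF position + orientation
        if pose is None:                   # track lost: fall back to detection
            self.initialized = False
        return pose
```

The key design point is that the expensive detector runs only at startup or after a lost track, while the tracker refines the pose every frame.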


IEEE Intelligent Vehicles Symposium | 2008

HyHOPE: Hybrid Head Orientation and Position Estimation for vision-based driver head tracking

Erik Murphy-Chutorian; Mohan M. Trivedi

Driver distraction and inattention are prominent causes of automotive collisions. To enable driver assistance systems to address these problems, we require new sensing approaches to infer a driver's focus of attention. In this paper, we present a new 3D tracking algorithm and integrate it into HyHOPE, a real-time (30 fps) hybrid head orientation and position estimation system for driver head tracking. With a single video camera, the system continuously tracks the head in six degrees of freedom, initializing itself automatically with separate modules for head detection and head pose estimation. The tracking module provides a fine estimate of the 3D motion of the head, using a new appearance-based algorithm for 3D model tracking by particle filtering in an augmented reality environment. We describe our implementation, which utilizes OpenGL-optimized graphics hardware to efficiently compute particle samples in real time. To quantitatively evaluate the accuracy of our system, we compare its estimation results to those of a marker-based cinematic motion capture system installed in an automotive testbed. We evaluate the system on real daytime and nighttime drives with drivers of varying ages, races, and sexes.


Workshop on Applications of Computer Vision | 2005

Shared Features for Scalable Appearance-Based Object Recognition

Erik Murphy-Chutorian; Jochen Triesch

We present a framework for learning object representations for fast recognition of a large number of different objects. Rather than learning and storing feature representations separately for each object, we create a finite set of representative features and share these features within and between different object models. In contrast to traditional recognition methods that scale linearly with the number of objects, the shared features can be exploited by bottom-up search algorithms which require a constant number of feature comparisons for any number of objects. We demonstrate the feasibility of this approach on a novel database of 50 everyday objects in cluttered real-world scenes. Using Gabor wavelet-response features extracted only at corner points, our system achieves good recognition results despite substantial occlusion and background clutter.
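The constant-cost lookup enabled by feature sharing can be illustrated with an inverted index: scene features quantized to a shared codebook vote for object models, so matching cost depends on the number of scene features and codebook size rather than the number of stored objects. The class and method names below are hypothetical, not the paper's implementation.

```python
from collections import defaultdict

class SharedFeatureIndex:
    """Inverted index from shared feature ids to the objects that use them."""

    def __init__(self):
        self.index = defaultdict(set)   # shared feature id -> object labels

    def add_object(self, label, feature_ids):
        # Objects are represented by subsets of one shared feature set,
        # so the same feature id can belong to many object models.
        for fid in feature_ids:
            self.index[fid].add(label)

    def recognize(self, scene_feature_ids):
        # One index lookup per scene feature, independent of how many
        # objects are stored; votes accumulate per object label.
        votes = defaultdict(int)
        for fid in scene_feature_ids:
            for label in self.index.get(fid, ()):
                votes[label] += 1
        return max(votes, key=votes.get) if votes else None
```

Contrast this with per-object matching, where every query would be compared against each object's private feature list in turn.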


International Conference on Robotics and Automation | 2004

Design of an anthropomorphic robot head for studying autonomous development and learning

Hyundo Kim; George York; Greg Burton; Erik Murphy-Chutorian; Jochen Triesch

We describe the design of an anthropomorphic robot head intended as a research platform for studying autonomously learning active vision systems. The robot head closely mimics the major degrees of freedom of the human neck/eye apparatus and allows a number of facial expressions. We show that our robot head can shift its direction of gaze at speeds that come close to those of human saccades. Since our design makes use of only low-cost, consumer-grade components, it paves the way for widespread use of anthropomorphic robot heads in science, education, health care, and entertainment.


Automotive User Interfaces and Interactive Vehicular Applications | 2012

On the design and evaluation of robust head pose for visual user interfaces: algorithms, databases, and comparisons

Sujitha Martin; Ashish Tawari; Erik Murphy-Chutorian; Shinko Y. Cheng; Mohan M. Trivedi

An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze, head, hand, and foot movement, upper body posture, etc. In this paper, we focus on estimating head pose, which has been shown to be a good predictor of driver intent and a good proxy for gaze estimation, and provide a valuable head pose database for future comparative studies. Most existing head pose estimation algorithms still struggle under large spatial head turns. Our method, however, relies on facial features that remain visible even during large spatial head turns. The method is evaluated on the LISA-P Head Pose database, which contains head pose data from on-road daytime and nighttime drivers of varying age, race, and gender; ground truth for head pose is provided by a motion capture system. With particular regard to eye gaze estimation for automotive user interface studies, the automatic head pose estimation technique presented in this paper can replace previous eye gaze estimation methods that rely on manual data annotation, or be used in conjunction with them when necessary.


British Machine Vision Conference | 2006

N-tree Disjoint-Set Forests for Maximally Stable Extremal Regions

Erik Murphy-Chutorian; Mohan M. Trivedi

In this paper we introduce the NDS-Forest data structure, which can be used for the calculation and representation of Maximally Stable Extremal Regions (MSERs) in real-time video. In contrast to the standard MSER algorithm, the NDS-Forest stores information about the extremal regions as they are formed, making it unnecessary to regrow the regions from seed pixels. Using the NDS-Forest structure, we describe a system that uses MSERs in an automobile for face registration, segmentation, and pose estimation of the driver.
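The bookkeeping behind incremental extremal-region growing rests on a disjoint-set forest: pixels are merged in order of intensity, and each root carries its region's size, so region statistics are available as regions form rather than by regrowing from seeds. The sketch below is a generic union-find with size tracking, not the NDS-Forest itself, which additionally maintains per-region history for stability checks.

```python
class DisjointSetForest:
    """Minimal union-find with size tracking (path halving, union by size)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n            # region size, valid at each root

    def find(self, x):
        while self.parent[x] != x:     # path halving keeps trees shallow
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:   # attach smaller tree to larger
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return ra                           # new root for the merged region
```

Because the size at the root is updated on every merge, a stability criterion can be evaluated online as the intensity threshold sweeps, which is what makes regrowing regions unnecessary.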


Computer Vision and Pattern Recognition | 2006

Semi-autonomous Learning of Objects

Hyundo Kim; Erik Murphy-Chutorian; Jochen Triesch

This paper presents a robotic vision system that can be taught to recognize novel objects in a semi-autonomous manner that does not require manual labeling or segmentation of any individual training images. Instead, unfamiliar objects are simply shown to the system in varying poses and scales against cluttered backgrounds, and the system automatically detects, tracks, segments, and builds representations for these objects. We demonstrate the feasibility of our approach by training the system to recognize one hundred household objects, which are presented to the system for about a minute each. Our method resembles the way that biological organisms learn to recognize objects, and it paves the way for a wealth of applications in robotics and other fields.


Computer Vision and Pattern Recognition | 2008

Particle filtering with rendered models: A two pass approach to multi-object 3D tracking with the GPU

Erik Murphy-Chutorian; Mohan M. Trivedi

We describe a new approach to vision-based 3D object tracking, using appearance-based particle filters to follow 3D model reconstructions. This method is targeted towards modern graphics processors, which are optimized for 3D reconstruction and are capable of highly parallel computation. We discuss an OpenGL implementation of this approach, which uses two rendering passes to update the particle filter weights. In the first pass, the system renders the previous object state estimates to an off-screen framebuffer. In the second pass, the system uses a programmable vertex shader to compute the mean normalized cross-correlation between each sample and the subsequent video frame. The particle filters are updated using the correlation scores and provide a full 3D track of the objects. We provide examples for tracking human heads in both single and multi-camera scenarios.
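The computation performed in the second rendering pass can be sketched on the CPU as follows: each particle's rendered patch is scored against the incoming frame by mean normalized cross-correlation, and the scores become particle weights. The paper evaluates this on the GPU via a programmable shader; the function names and the exponential weighting scheme below are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def update_weights(rendered_particles, frame_patch, temperature=0.1):
    """Map NCC scores in [-1, 1] to positive, normalized particle weights."""
    scores = np.array([ncc(r, frame_patch) for r in rendered_particles])
    w = np.exp(scores / temperature)   # sharpen weights around best matches
    return w / w.sum()
```

A particle whose rendered model reconstruction closely matches the video frame receives a correlation near 1 and dominates the weight distribution, so the filter concentrates samples around the true 3D pose.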

Collaboration


Top co-authors of Erik Murphy-Chutorian:

Jochen Triesch (Frankfurt Institute for Advanced Studies)

Hyundo Kim (University of California)