Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anup Doshi is active.

Publication


Featured research published by Anup Doshi.


International Conference on Intelligent Transportation Systems | 2007

Head Pose Estimation for Driver Assistance Systems: A Robust Algorithm and Experimental Evaluation

Erik Murphy-Chutorian; Anup Doshi; Mohan M. Trivedi

Recognizing driver awareness is an important prerequisite for the design of advanced automotive safety systems. Since visual attention is constrained to a driver's field of view, knowing where a driver is looking provides useful cues about his or her activity and awareness of the environment. This work presents an identity- and lighting-invariant system to estimate a driver's head pose. The system is fully autonomous and operates online in daytime and nighttime driving conditions, using a monocular video camera sensitive to visible and near-infrared light. We investigate the limitations of alternative systems when operated in a moving vehicle and compare our approach, which integrates Localized Gradient Orientation histograms with support vector machines for regression. We estimate the orientation of the driver's head in two degrees of freedom and evaluate the accuracy of our method in a vehicular testbed equipped with a cinematic motion capture system.
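To make the feature step concrete, here is a minimal sketch, not the authors' code, of the kind of gradient-orientation histogram the abstract names. The function name and patch values are hypothetical; the paper pairs such features with support vector regression for yaw and pitch.

```python
# Hedged sketch: bin gradient orientations of a grayscale patch into a
# normalised histogram, a simplified stand-in for the paper's Localized
# Gradient Orientation (LGO) features. All names here are illustrative.
import math

def lgo_histogram(patch, bins=8):
    """Histogram of unsigned gradient orientations over a 2-D patch."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            theta = math.atan2(gy, gx) % math.pi     # unsigned orientation
            h[min(int(theta / math.pi * bins), bins - 1)] += mag
    total = sum(h) or 1.0
    return [v / total for v in h]  # normalisation aids lighting invariance

# A vertical edge: every gradient is horizontal, so bin 0 gets all the mass.
patch = [[0, 0, 9, 9]] * 4
hist = lgo_histogram(patch)
```

Normalising by total gradient magnitude is one simple way to get the lighting invariance the abstract claims; in the paper, a vector of such histograms would feed the regressor.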


Systems, Man and Cybernetics | 2009

A Novel Active Heads-Up Display for Driver Assistance

Anup Doshi; Shinko Yuanhsien Cheng; Mohan M. Trivedi

In this paper, we introduce a novel laser-based wide-area heads-up windshield display which is capable of actively interfacing with a human as part of a driver assistance system. The dynamic active display (DAD) is a unique prototype interface that presents safety-critical visual icons to the driver in a manner that minimizes the deviation of his or her gaze direction without adding unnecessary visual clutter. As part of an automotive safety system, the DAD presents alerts in the field of view of the driver only if necessary, based upon the state and pose of the driver, vehicle, and environment. This paper examines the effectiveness of the DAD through a comprehensive comparative experimental evaluation of a speed compliance driver assistance system, which is implemented on a vehicular test bed. Three different types of display protocols for assisting a driver to comply with speed limits are tested on actual roadways, and these are compared with a conventional dashboard display. Given the inclination, drivers who are given an overspeed warning alert reduced the time required to slow down to the speed limit by 38% (p < 0.01) as compared with the drivers not given the alert. Additionally, certain alerts decreased distraction levels by reducing the time spent looking away from the road by 63% (p < 0.01). Ultimately, these alerts demonstrate the utility and promise of the DAD system.


IEEE Pervasive Computing | 2011

On-road prediction of driver's intent with multimodal sensory cues

Anup Doshi; Brendan Morris; Mohan M. Trivedi

By predicting a driver's maneuvers before they occur, a driver-assistance system can prepare for or avoid dangerous situations. This article describes a real-time, on-road lane-change-intent detector that can enhance driver safety.


International Conference on Intelligent Transportation Systems | 2011

Tactical driver behavior prediction and intent inference: A review

Anup Doshi; Mohan M. Trivedi

Drawing upon fundamental research in human behavior prediction, researchers have recently focused on how to predict driver behaviors. In this paper we review the field of driver behavior and intent prediction, with a specific focus on tactical maneuvers, as opposed to operational or strategic maneuvers. The aim of a driver behavior prediction system is to forecast the trajectory of the vehicle in real time, prior to a maneuver, which could allow a Driver Assistance System to compensate for dangerous or uncomfortable circumstances. This review provides insights into the scope of the problem, as well as the inputs, algorithms, performance metrics, and shortcomings of state-of-the-art systems.


IEEE Intelligent Vehicles Symposium | 2011

Lane change intent prediction for driver assistance: On-road design and evaluation

Brendan Morris; Anup Doshi; Mohan M. Trivedi

Automobiles are quickly becoming more complex as new sensors and support systems are added to improve safety and comfort. The next generation of intelligent driver assistance systems will need to utilize this wide array of sensors to fully understand the driving context and situation. Effective interaction requires these systems to examine the intentions, desires, and needs of the driver for preemptive actions which can help prepare for or avoid dangerous situations. This manuscript develops a real-time on-road prediction system able to detect a driver's intention to change lanes seconds before it occurs. In-depth analysis highlights the challenges when moving intent prediction from the laboratory to the road and provides detailed characterization of on-road performance.


Computer Vision and Image Understanding | 2012

Modeling and prediction of driver behavior by foot gesture analysis

Cuong Tran; Anup Doshi; Mohan M. Trivedi

Understanding driver behavior is an essential component in human-centric Intelligent Driver Assistance Systems. Specifically, driver foot behavior is an important factor in controlling the vehicle, though there have been very few research studies analyzing foot behavior. While embedded pedal sensors may reveal some information about driver foot behavior, vision-based foot behavior analysis has additional advantages. The foot movement before and after a pedal press can provide valuable information for better semantic understanding of driver behaviors, states, and styles. It can also be used to gain a time advantage in predicting a pedal press before it actually happens, which is very important for providing proper assistance to the driver in time-critical (e.g., safety-related) situations. In this paper, we propose and develop a new vision-based framework for driver foot behavior analysis using optical-flow-based foot tracking and a Hidden Markov Model (HMM) based technique to characterize the temporal foot behavior. In our experiment with a real-world driving testbed, we also use our trained HMM foot behavior model for prediction of brake and acceleration pedal presses. The experimental results over different subjects provided high accuracy (~94% on average) for both foot behavior state inference and pedal press prediction. At 133 ms before the actual press, ~74% of the pedal presses were predicted correctly. This shows the promise of applying this approach to real-world driver assistance systems.
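The HMM idea above can be sketched in a few lines. This is a hedged illustration, not the trained model from the paper: the states, transition table, and emission table below are invented for the example, and the forward algorithm simply maintains a belief over foot states as quantised motion observations arrive.

```python
# Toy discrete HMM over hypothetical foot states. The forward algorithm
# updates P(state | observations) one observation at a time.
STATES = ["hover", "move_to_brake", "press_brake"]

# Hypothetical transition probabilities (rows sum to 1).
TRANS = {
    "hover":         {"hover": 0.8, "move_to_brake": 0.2, "press_brake": 0.0},
    "move_to_brake": {"hover": 0.1, "move_to_brake": 0.6, "press_brake": 0.3},
    "press_brake":   {"hover": 0.1, "move_to_brake": 0.1, "press_brake": 0.8},
}
# Observations: quantised optical-flow magnitude of the foot region.
EMIT = {
    "hover":         {"still": 0.7, "slow": 0.2, "fast": 0.1},
    "move_to_brake": {"still": 0.1, "slow": 0.3, "fast": 0.6},
    "press_brake":   {"still": 0.5, "slow": 0.4, "fast": 0.1},
}

def forward(observations, prior=None):
    """Return P(state | observations so far) via the forward algorithm."""
    belief = prior or {s: 1.0 / len(STATES) for s in STATES}
    for obs in observations:
        nxt = {s: EMIT[s][obs] * sum(belief[p] * TRANS[p][s] for p in STATES)
               for s in STATES}
        z = sum(nxt.values())                      # normalise the belief
        belief = {s: v / z for s, v in nxt.items()}
    return belief

# Fast foot motion followed by deceleration suggests a move toward the pedal.
belief = forward(["still", "fast", "fast", "slow"])
likely = max(belief, key=belief.get)
```

In the paper's setting, a rising belief in a pre-press state is what yields the time advantage before the pedal is actually touched.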


Journal of Vision | 2012

Head and eye gaze dynamics during visual attention shifts in complex environments

Anup Doshi; Mohan M. Trivedi

The dynamics of overt visual attention shifts evoke certain patterns of responses in eye and head movements. In this work, we detail novel findings regarding the interaction of eye gaze and head pose under various attention-switching conditions in complex environments and safety-critical tasks such as driving. In particular, we find that sudden, bottom-up visual cues in the periphery evoke a different pattern of eye-head movement latencies than top-down, task-oriented attention shifts. In laboratory vehicle simulator experiments, a unique and significant (p < 0.05) pattern of preparatory head motions, prior to the gaze saccade, emerges in the top-down case. This finding is validated in qualitative analysis of naturalistic real-world driving data. These results demonstrate that measurements of eye-head dynamics are useful for detecting driver distractions, as well as for classifying human attentive states in time- and safety-critical tasks.


Computer Vision and Pattern Recognition | 2010

Attention estimation by simultaneous observation of viewer and view

Anup Doshi; Mohan M. Trivedi

We introduce a new approach to analyzing the attentive state of a human subject, given cameras focused on the subject and their environment. In particular, the task of analyzing the focus of attention of a human driver is of primary concern. Up to 80% of automobile crashes are related to driver inattention; thus it is important for an Intelligent Driver Assistance System (IDAS) to be aware of the driver state. We present a new Bayesian paradigm for estimating human attention, specifically addressing the problems arising in dynamic situations. The model incorporates vision-based gaze estimation, “top-down”- and “bottom-up”-based visual saliency maps, and cognitive considerations such as inhibition of return and center bias that affect the relationship between gaze and attention. Results demonstrate the validity of the approach on real driving data, showing quantitative improvements over systems using only gaze or only saliency, and elucidate the value of such a model for any human-machine interface.
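The core Bayesian step described above, fusing a noisy gaze estimate with a saliency map, can be illustrated with a toy computation. The scene regions and probabilities below are made up for the example; the paper's model additionally handles inhibition of return and center bias, which this sketch omits.

```python
# Hedged sketch of Bayesian attention fusion: the posterior over attended
# regions is proportional to the gaze likelihood times a saliency prior.

def attention_posterior(gaze_likelihood, saliency_prior):
    """P(region | gaze) proportional to P(gaze | region) * P(region)."""
    unnorm = {r: gaze_likelihood[r] * saliency_prior[r] for r in saliency_prior}
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

# Hypothetical scene regions. The saliency map acts as a prior; the gaze
# estimator supplies a per-region likelihood of producing the observed gaze.
saliency_prior  = {"road_ahead": 0.5, "left_mirror": 0.3, "dashboard": 0.2}
gaze_likelihood = {"road_ahead": 0.2, "left_mirror": 0.7, "dashboard": 0.1}

posterior = attention_posterior(gaze_likelihood, saliency_prior)
attended = max(posterior, key=posterior.get)
```

The point of the fusion is visible even in this toy case: neither the most salient region nor the raw gaze estimate alone decides the answer; the product does.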


IEEE Intelligent Vehicles Symposium | 2009

Investigating the relationships between gaze patterns, dynamic vehicle surround analysis, and driver intentions

Anup Doshi; Mohan M. Trivedi

Recent advances in driver behavior analysis for Active Safety have led to the ability to reliably predict certain driver intentions. Specifically, researchers have developed Advanced Driver Assistance Systems that produce an estimate of a driver's intention to change lanes, make an intersection turn, or brake, several seconds before the act itself. One integral feature in these systems is the analysis of driver visual search prior to a maneuver, using head pose and eye gaze as a proxy to determine focus of attention. However, it is not clear whether visual distractions during a goal-oriented visual search could change the driver's behavior and thereby cause a degradation in the performance of the behavior analysis systems. In this paper we aim to determine whether it is feasible to use computer vision to determine whether a driver's visual search was affected by an external stimulus. A holistic ethnographic driving dataset is used as a basis to generate a motion-based visual saliency map of the scene. This map is correlated with predetermined eye gaze data in situations where a driver intends to change lanes. Results demonstrate the capability of this methodology to improve driver attention and behavior estimation, as well as intent prediction.


IEEE Intelligent Vehicles Symposium | 2008

A comparative exploration of eye gaze and head motion cues for lane change intent prediction

Anup Doshi; Mohan M. Trivedi

Driver behavioral cues may present a rich source of information and feedback for future intelligent driver assistance systems (IDAS). Two of the most useful cues might be eye gaze and head motion. Eye gaze provides a more accurate proxy than head motion for determining driver attention, whereas the measurement of head motion as a derivative of pose is less cumbersome and more reliable in harsh driving conditions. With the design of a simple and robust IDAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. We use a lane change intent prediction system [1] to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of a probable lane change maneuver at a particular point in the future. Quantitative results using real-world data are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane change intent prediction. The addition of eye gaze does not improve performance as much as simpler head pose-based cues.
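The cue-comparison experiment described above can be caricatured with a linear discriminant over a chosen feature subset. Everything below is hypothetical: the weights, the feature values, and the threshold stand in for the paper's trained classifier, and serve only to show how different cue combinations would be scored by a common model.

```python
# Hedged sketch: score a frame's cue values with a linear discriminant;
# comparing classifiers trained on different cue subsets is the paper's
# methodology, not these made-up numbers.

def lane_change_score(features, weights, bias=-1.0):
    """Linear discriminant score; > 0 predicts an upcoming lane change."""
    return sum(weights[k] * features[k] for k in weights) + bias

# Hypothetical per-cue weights, reflecting the finding that head motion
# (with lane position and vehicle dynamics) carries most of the signal.
WEIGHTS = {"head_motion": 2.0, "lane_position": 1.0, "vehicle_dynamics": 0.5}

# One frame of normalised cue values shortly before a lane change.
frame = {"head_motion": 0.9, "lane_position": 0.4, "vehicle_dynamics": 0.2}

score = lane_change_score(frame, WEIGHTS)
predicts_change = score > 0
```

Swapping the weight dictionary for one that includes an eye-gaze term is the shape of the comparison the abstract reports: if the score barely moves, the added cue buys little.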

Collaboration


Dive into Anup Doshi's collaborations.

Top Co-Authors

Cuong Tran
University of California

Ashish Tawari
University of California

Matthew H. Wilder
University of Colorado Boulder

Michael C. Mozer
University of Colorado Boulder

Thorsten O. Zander
Technical University of Berlin