
Publications

Featured research published by Ashish Tawari.


IEEE Transactions on Multimedia | 2010

Speech Emotion Analysis: Exploring the Role of Context

Ashish Tawari; Mohan M. Trivedi

Automated analysis of human affective behavior has attracted increasing attention in recent years. With the research shift toward spontaneous behavior, many challenges have come to the surface, ranging from database collection strategies to the use of new feature sets (e.g., lexical cues apart from prosodic features). Use of contextual information, however, is rarely addressed in the field of affect expression recognition, yet it is evident that affect recognition by humans is largely influenced by contextual information. Our contribution in this paper is threefold. First, we introduce a novel set of features based on cepstrum analysis of pitch and intensity contours. We evaluate the usefulness of these features on two different databases: the Berlin Database of Emotional Speech (EMO-DB) and a locally collected audiovisual database in a car setting (CVRRCar-AVDB). The overall recognition accuracy achieved is over 84% for seven emotions in EMO-DB and over 87% for three emotion classes in CVRRCar-AVDB, based on tenfold stratified cross validation. Second, we introduce the collection of a new audiovisual database in an automobile setting (CVRRCar-AVDB). In the current study, we only use the audio channel of the database. Third, we systematically analyze the effects of different contexts on the two databases. We present context analysis of subject and text based on speaker- and text-dependent/independent analysis on EMO-DB. Furthermore, we perform context analysis based on gender information on EMO-DB and CVRRCar-AVDB. The results of these analyses are promising.
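The core feature idea here is cepstrum analysis applied to prosodic contours rather than to the raw waveform. The sketch below shows one standard way to compute a real cepstrum of a pitch contour in Python; the toy contour, sampling rate, and coefficient count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def contour_cepstrum(contour, n_coeffs=10):
    """Real cepstrum of a prosodic contour (e.g., pitch or intensity).

    The real cepstrum is the inverse FFT of the log magnitude spectrum;
    its low-order coefficients summarize the slow modulation of the
    contour. A minimal sketch of the idea, not the authors' exact
    feature extraction.
    """
    contour = np.asarray(contour, dtype=float)
    contour = contour - contour.mean()          # remove DC offset
    spectrum = np.fft.rfft(contour)
    log_mag = np.log(np.abs(spectrum) + 1e-10)  # avoid log(0)
    cepstrum = np.fft.irfft(log_mag)
    return cepstrum[:n_coeffs]

# Toy pitch contour (Hz) sampled every 10 ms; in practice this would
# come from a pitch tracker run on the utterance.
t = np.linspace(0, 1, 100)
pitch = 180 + 20 * np.sin(2 * np.pi * 3 * t)    # 3 Hz pitch modulation
features = contour_cepstrum(pitch)
print(features)
```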


International Conference on Pattern Recognition | 2010

Speech Emotion Analysis in Noisy Real-World Environment

Ashish Tawari; Mohan M. Trivedi

Automatic recognition of emotional states from the speech signal has attracted increasing attention in recent years. A number of techniques have been proposed that provide reasonably high accuracy in controlled studio settings. However, their performance degrades considerably when the speech signal is contaminated by noise. In this paper, we present a framework with adaptive noise cancellation as a front end to a speech emotion recognizer. We also introduce a new feature set based on cepstral analysis of pitch and energy contours. Experimental analysis shows promising results.
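The paper names adaptive noise cancellation as the front end; a least-mean-squares (LMS) filter is one classic realization of ANC, shown below as a sketch. The two-microphone setup (a noisy primary channel plus a noise-only reference), the tap count, and the step size are assumptions for illustration.

```python
import numpy as np

def lms_anc(primary, reference, n_taps=32, mu=0.01):
    """LMS adaptive noise canceller (sketch).

    `primary` is speech corrupted by noise; `reference` is a noise-only
    signal correlated with that noise (e.g., from a second microphone).
    The filter learns to predict the noise component from the reference,
    and the prediction error is the enhanced speech.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent samples first
        noise_est = w @ x
        e = primary[n] - noise_est          # enhanced speech sample
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

# Toy demo: sinusoidal "speech" plus scaled noise, noise-only reference.
rng = np.random.default_rng(0)
noise = rng.standard_normal(8000)
speech = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
enhanced = lms_anc(speech + 0.5 * noise, noise)
```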


IEEE Transactions on Intelligent Transportation Systems | 2014

Continuous Head Movement Estimator for Driver Assistance: Issues, Algorithms, and On-Road Evaluations

Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, the head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamic analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features during spatially large head movements away from the frontal pose. Nonfrontal glances away from the road ahead, however, are of special interest, since events critical to driver safety occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to operate robustly and continuously even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate the head pose using a 3-D model. We present two such solutions that additionally exploit the constraints present in a driving context and in the video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study with different camera configurations. For experimental evaluations, we collected a novel head pose data set from naturalistic on-road driving in urban streets and freeways, with particular emphasis on events inducing spatially large head movements (e.g., merge and lane change). Our analyses show promising results.
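The geometric idea, recovering head pose from tracked facial landmarks and a 3-D face model, can be realized as a perspective-n-point (PnP) problem. Below is a minimal OpenCV sketch under simplifying assumptions: a generic 3-D model with illustrative coordinates, a pinhole camera with focal length equal to the image width, and no lens distortion. It is not the paper's exact model or pipeline.

```python
import numpy as np
import cv2

# Generic 3-D face model points (millimetres); values are illustrative.
MODEL_3D = np.array([
    [  0.0,   0.0,   0.0],   # nose tip
    [  0.0, -63.6, -12.5],   # chin
    [-43.3,  32.7, -26.0],   # left eye outer corner
    [ 43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9, -28.9, -24.1],   # left mouth corner
    [ 28.9, -28.9, -24.1],   # right mouth corner
])

def head_pose_from_landmarks(pts_2d, frame_size):
    """pts_2d: (6, 2) float array of landmark pixels, same order as MODEL_3D."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2],
                    [0, w, h / 2],
                    [0, 0, 1]], dtype=float)      # assumed pinhole camera
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, pts_2d, cam, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> matrix
    # Euler angles from the rotation matrix (one common convention).
    yaw = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0])))
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, pitch, roll
```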


Computer Vision and Image Understanding | 2015

On surveillance for safety critical events

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Highlights:
- A distributed camera-sensor system for driver assistance and situational awareness.
- Systematic, comparative evaluation of cues for prediction of safety-critical events.
- Real-time prediction of overtaking and braking maneuvers.
- Detailed temporal analysis of the utility of various cues for maneuver prediction.
- Early prediction (1-2 s before the maneuver) is shown on real-world data.

We study techniques for monitoring and understanding real-world human activities, in particular of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically overtake and brake events. Study of this particular domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help to recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings.
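One way to picture the fusion-and-early-prediction setup is as windowed features over synchronized cue streams, labelled by whether a maneuver follows. The sketch below uses a plain logistic regression as a stand-in for the paper's temporal model, with fake data; cue dimensions, window width, and the classifier choice are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(cues, end, width=25):
    """Mean and slope of each cue over a window (~1 s at 25 Hz)."""
    win = cues[end - width:end]                     # (width, n_cues)
    slope = (win[-1] - win[0]) / width
    return np.concatenate([win.mean(axis=0), slope])

rng = np.random.default_rng(0)
cues = rng.standard_normal((5000, 8))   # fake synchronized cue streams
labels = rng.integers(0, 2, size=180)   # fake maneuver / no-maneuver labels
ends = rng.integers(25, 5000, size=180) # window end indices before events

X = np.stack([window_features(cues, e) for e in ends])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X[:3]))         # maneuver probabilities
```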


Intelligent Vehicles Symposium | 2014

Looking-in and looking-out vision for Urban Intelligent Assistance: Estimation of driver attentive state and dynamic surround for safe merging and braking

Ashish Tawari; Sayanan Sivaraman; Mohan M. Trivedi; Trevor Shannon; Mario Tippelhofer

This paper details the research, development, and demonstrations of real-world systems intended to assist the driver in urban environments, as part of the Urban Intelligent Assist (UIA) research initiative, a 3-year collaboration between Audi AG, Volkswagen Group of America Electronics Research Laboratory, and UC San Diego. The driver assistance portion of the UIA project focuses on two main use cases of vital importance in urban driving. The first, Driver Attention Guard, applies novel computer vision and machine learning research to accurately track the driver's head position and rotation using an array of cameras. The system then infers the driver's focus of attention, alerting the driver and engaging safety systems in case of extended driver inattention. The second application, Merge and Lane Change Assist, applies a novel probabilistic compact representation of the on-road environment, fusing data from a variety of sensor modalities. The system then computes safe and low-cost merge and lane-change maneuver recommendations, communicating desired speeds to the driver via a head-up display when the driver activates the turn signal to indicate the desired lane. The fully implemented systems, complete with HMI, were demonstrated to the public and press in San Francisco in January 2014.
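To make the merge-recommendation idea concrete, here is a heavily simplified gap-acceptance check: given tracked lead and lag vehicles in the target lane, project the gaps over a short horizon and require a safety margin. The thresholds, field names, and the deterministic logic are illustrative assumptions; the actual system uses a probabilistic environment representation.

```python
from dataclasses import dataclass

@dataclass
class Track:
    gap_m: float      # signed longitudinal gap to ego (+ ahead, - behind)
    rel_speed: float  # target-lane vehicle speed minus ego speed (m/s)

def merge_is_safe(lead: Track, lag: Track, horizon_s=3.0, margin_m=8.0):
    """Project both gaps over the horizon and require a safety margin."""
    lead_gap = lead.gap_m + lead.rel_speed * horizon_s
    lag_gap = -lag.gap_m - lag.rel_speed * horizon_s
    return lead_gap > margin_m and lag_gap > margin_m

# Lead car 20 m ahead pulling away; lag car 15 m behind, slowly closing.
print(merge_is_safe(Track(20.0, 1.0), Track(-15.0, -0.5)))  # -> True
```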


Intelligent Vehicles Symposium | 2014

Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos

Ashish Tawari; Mohan M. Trivedi

Analysis of a driver's head behavior is an integral part of a driver monitoring system. A driver's coarse gaze direction, or gaze zone, is a very important cue in understanding driver state. Many existing gaze zone estimators are, however, limited to single-camera perspectives, which are vulnerable to occlusions of facial features during spatially large head movements away from the frontal pose. Non-frontal glances away from the driving direction, though, are of special interest, as events critical to driver safety occur during those times. In this paper, we present a distributed camera framework for gaze zone estimation using head pose dynamics that operates robustly and continuously even during large head movements. For experimental evaluations, we collected a dataset from naturalistic on-road driving in urban streets and freeways. A human expert provided the gaze zone ground truth using all vision information, including the eyes and surrounding context. Our emphasis is on understanding the efficacy of head pose dynamics in predicting eye-gaze-based zone ground truth. We conducted several experiments in designing the dynamic features and compared the performance against a static head-pose-based approach. Analyses show that dynamic information significantly improves the results.
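The static-versus-dynamic comparison can be sketched as follows: a "static" feature is the head pose at the query frame, while "dynamic" features add deltas and movement speed over a short history, the kind of information the paper reports as helpful. The track length, zone count, and random-forest classifier are illustrative choices, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dynamic_features(pose_seq):
    """pose_seq: (T, 3) yaw/pitch/roll track ending at the query frame."""
    cur = pose_seq[-1]                                 # static pose
    delta = pose_seq[-1] - pose_seq[0]                 # net movement
    speed = np.abs(np.diff(pose_seq, axis=0)).mean(axis=0)
    return np.concatenate([cur, delta, speed])

rng = np.random.default_rng(1)
tracks = rng.standard_normal((400, 15, 3)) * 20        # fake 15-frame tracks
zones = rng.integers(0, 6, size=400)                   # fake zone labels

X = np.stack([dynamic_features(t) for t in tracks])
clf = RandomForestClassifier(n_estimators=100).fit(X, zones)
```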


International Conference on Pattern Recognition | 2014

Head, Eye, and Hand Patterns for Driver Activity Recognition

Eshed Ohn-Bar; Sujitha Martin; Ashish Tawari; Mohan M. Trivedi

In this paper, a multiview, multimodal vision framework is proposed in order to characterize driver activity based on head, eye, and hand cues. Leveraging the three types of cues allows for a richer description of the driver's state and for improved activity detection performance. First, regions of interest are extracted from two videos, one observing the driver's hands and one the driver's head. Next, hand location hypotheses are generated and integrated with a head pose and facial landmark module in order to classify driver activity into three states: wheel region interaction with two hands on the wheel, gear region activity, or instrument cluster region activity. The method is evaluated on a video dataset captured in on-road settings.
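A bare-bones version of the fusion step assigns hand-location hypotheses to annotated regions and names the activity from the region hits. The region boxes and rule-based decision logic below are illustrative stand-ins for the paper's trained model.

```python
# Region boxes (x0, y0, x1, y1) in the hand-camera view; values assumed.
REGIONS = {
    "wheel":      (100, 200, 400, 420),
    "gear":       (420, 300, 560, 450),
    "instrument": (250, 100, 420, 190),
}

def point_in(box, x, y):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def classify_activity(hands):
    """hands: list of (x, y) hand-location hypotheses in the hand camera."""
    hits = {name: sum(point_in(box, x, y) for x, y in hands)
            for name, box in REGIONS.items()}
    if hits["gear"]:
        return "gear region activity"
    if hits["instrument"]:
        return "instrument cluster region activity"
    if hits["wheel"] >= 2:
        return "wheel region interaction, two hands on wheel"
    return "unknown"

print(classify_activity([(150, 300), (350, 300)]))  # -> two hands on wheel
```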


International Conference on Intelligent Transportation Systems | 2014

Where Is the Driver Looking: Analysis of Head, Eye and Iris for Robust Gaze Zone Estimation

Ashish Tawari; Kuo Hao Chen; Mohan M. Trivedi

A driver's gaze direction is critical information in understanding driver state. In this paper, we present a distributed camera framework to estimate the driver's coarse gaze direction using both head and eye cues. Coarse gaze direction is often sufficient in a number of applications; the challenge, however, is to estimate gaze direction robustly in naturalistic real-world driving. Toward this end, we propose gaze-surrogate features estimated from the eye region via eyelid and iris analysis, and we present a novel computational framework for iris detection. We are able to extract the proposed features robustly and determine the driver's gaze zone effectively. We evaluated the proposed system on a dataset collected from naturalistic on-road driving in urban streets and freeways. A human expert annotated the driver's gaze zone ground truth using information from the driver's eyes and the surrounding context. We conducted two experiments to compare the performance of gaze zone estimation with and without eye cues. The head-alone experiment yields reasonably good results for most of the gaze zones, with an overall weighted accuracy of 79.8%. Adding eye cues boosts the overall weighted accuracy to 94.9%, and every individual gaze zone achieves a better true detection rate, especially between adjacent zones. Our experimental evaluations therefore show the efficacy of the proposed features and very promising results for robust gaze zone estimation.
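One simple example of a gaze-surrogate feature in this spirit is the iris centre's normalized horizontal position between the eye corners, appended to head pose so a classifier can separate zones that head pose alone confuses (e.g., adjacent mirror glances). The landmark inputs and the specific normalization are assumptions for illustration, not the paper's exact features.

```python
import numpy as np

def gaze_surrogate(iris_xy, inner_corner_xy, outer_corner_xy):
    """0.0 = iris at inner eye corner, 1.0 = iris at outer corner."""
    axis = np.subtract(outer_corner_xy, inner_corner_xy)
    rel = np.subtract(iris_xy, inner_corner_xy)
    return float(np.dot(rel, axis) / np.dot(axis, axis))

head_pose = np.array([12.0, -4.0, 1.5])              # yaw, pitch, roll (deg)
eye_cue = gaze_surrogate((312, 240), (300, 241), (330, 238))
features = np.append(head_pose, eye_cue)             # head + eye feature vector
print(features)
```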


Automotive User Interfaces and Interactive Vehicular Applications | 2012

On the design and evaluation of robust head pose for visual user interfaces: algorithms, databases, and comparisons

Sujitha Martin; Ashish Tawari; Erik Murphy-Chutorian; Shinko Y. Cheng; Mohan M. Trivedi

An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze, head, hand, and foot movement, upper body posture, etc. In this paper, we focus on estimating head pose, which has been shown to be a good predictor of driver intent and a good proxy for gaze estimation, and we provide a valuable head pose database for future comparative studies. Most existing head pose estimation algorithms still struggle under large spatial head turns. Our method, however, relies on facial features that remain visible even during large spatial head turns. The method is evaluated on the LISA-P Head Pose database, which contains head pose data from on-road daytime and nighttime drivers of varying age, race, and gender; ground truth for head pose is provided by a motion capture system. With special regard to eye gaze estimation for automotive user interface studies, the automatic head pose estimation technique presented in this paper can replace previous eye gaze estimation methods that rely on manual data annotation, or be used in conjunction with them when necessary.
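The evaluation protocol implied here, comparing estimator output against motion-capture ground truth frame by frame, reduces to a per-axis error summary. The sketch below computes mean absolute error per rotation axis; the arrays are stand-ins for real estimator output and the metric choice is an assumption.

```python
import numpy as np

def pose_mae(estimated, ground_truth):
    """estimated, ground_truth: (N, 3) yaw/pitch/roll in degrees."""
    err = np.abs(np.asarray(estimated) - np.asarray(ground_truth))
    return err.mean(axis=0)      # per-axis MAE: [yaw, pitch, roll]

rng = np.random.default_rng(2)
gt = rng.uniform(-60, 60, size=(1000, 3))           # fake mocap ground truth
est = gt + rng.normal(0, 3, size=gt.shape)          # estimator with ~3 deg noise
print(dict(zip(["yaw", "pitch", "roll"], pose_mae(est, gt).round(2))))
```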


Intelligent Vehicles Symposium | 2014

Predicting driver maneuvers by learning holistic features

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

In this work, we propose a framework for the recognition and prediction of driver maneuvers by considering holistic cues. With an array of sensors, the driver's head, hand, and foot gestures are captured in a synchronized manner together with lane, surrounding agents, and vehicle parameters. An emphasis is put on real-time algorithms. The cues are processed and fused using a latent-dynamic discriminative framework. As a case study, driver activity recognition and prediction in overtaking situations is performed using a naturalistic on-road dataset. A consequence of this work would be the development of more effective driver analysis and assistance systems.
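Before any fusion model can run, the cue streams (cameras, CAN bus, lane tracker) must be brought onto a common timeline, since they arrive at different rates. The sketch below shows that synchronization step via linear interpolation; the stream rates are illustrative, and the paper's actual temporal model (a latent-dynamic discriminative framework) is not reproduced here.

```python
import numpy as np

def resample(t_src, values, t_common):
    """Linearly interpolate one multi-channel cue stream onto t_common."""
    values = np.asarray(values, dtype=float)
    return np.stack([np.interp(t_common, t_src, values[:, i])
                     for i in range(values.shape[1])], axis=1)

rng = np.random.default_rng(3)
t_common = np.arange(0.0, 10.0, 1 / 25)              # 25 Hz fusion timeline
head = resample(np.arange(0, 10, 1 / 30),            # 30 Hz head pose stream
                rng.standard_normal((300, 3)), t_common)
can = resample(np.arange(0, 10, 1 / 100),            # 100 Hz CAN stream
               rng.standard_normal((1000, 2)), t_common)
holistic = np.hstack([head, can])                    # (250, 5) fused features
```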

Collaboration

Top co-authors of Ashish Tawari.

Sujitha Martin, University of California
Eshed Ohn-Bar, University of California
Ethan R. Duni, University of California
Cuong Tran, University of California
Anup Doshi, University of California
Jade Kwan, University of California