Publication


Featured research published by Sujitha Martin.


IEEE Intelligent Vehicles Symposium | 2013

Driver classification and driving style recognition using inertial sensors

Minh Van Ly; Sujitha Martin; Mohan M. Trivedi

Much current research focuses on using smartphones as data collection devices, and several studies have shown that their inertial sensors can replace a laboratory test bed and can segment and classify driving events fairly accurately. In this work, we explore using the vehicle's own inertial sensors, read from the CAN bus, to build a profile of the driver, with the ultimate goal of providing feedback that reduces the number of dangerous maneuvers. We find that braking and turning events characterize an individual driver better than acceleration events, and that histogramming the time-series sensor values does not improve performance. Furthermore, combining turning and braking events helps supervised learning techniques differentiate between two similar drivers better than either event type alone, albeit with modest overall performance.
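
As a concrete illustration of the supervised-learning setup described above, the sketch below trains an SVM to tell two drivers apart from per-event summary features. The feature choices, the synthetic event traces, and all parameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: two-driver classification from braking/turning event
# features, assuming synthetic data in place of CAN-bus recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def event_features(signal):
    """Summary statistics for one braking/turning event (assumed features)."""
    return [signal.mean(), signal.std(), signal.min(), signal.max(),
            np.abs(np.diff(signal)).mean()]

# Placeholder: 40 events per driver, each a short yaw-rate/brake trace.
events_a = [rng.normal(0.0, 1.0, 50) for _ in range(40)]
events_b = [rng.normal(0.3, 1.2, 50) for _ in range(40)]

X = np.array([event_features(e) for e in events_a + events_b])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf", gamma="scale")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```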


IEEE Transactions on Intelligent Transportation Systems | 2014

Continuous Head Movement Estimator for Driver Assistance: Issues, Algorithms, and On-Road Evaluations

Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamics analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features during spatially large head movements away from the frontal pose. Non-frontal glances away from the road ahead are of special interest, since events critical to driver safety occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to operate robustly and continuously even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate head pose using a 3-D model. We present two such solutions that additionally exploit constraints present in the driving context and in the video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study of different camera configurations. For experimental evaluation, we collected a novel head pose data set from naturalistic on-road driving on urban streets and freeways, with particular emphasis on events that induce spatially large head movements (e.g., merges and lane changes). Our analyses show promising results.
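
For readers unfamiliar with model-based head pose estimation, the sketch below shows the standard landmarks-plus-3-D-model recipe using OpenCV's solvePnP. The generic 3-D face model, the 2-D landmark coordinates, and the camera intrinsics are placeholder values; the paper's multi-camera tracking is not reproduced here.

```python
# Sketch: head pose from tracked facial landmarks and a generic 3-D model.
import cv2
import numpy as np

# Generic 3-D model points (mm): nose tip, chin, eye corners, mouth corners.
model_3d = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3, 32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

# 2-D landmark detections in the image (placeholder pixel coordinates).
image_2d = np.array([[359, 391], [399, 561], [337, 297],
                     [513, 301], [345, 465], [453, 469]], dtype=np.float64)

# Assumed pinhole camera: focal length ~ image width, principal point at center.
w, h = 640, 480
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_3d, image_2d, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
angles, *_ = cv2.RQDecomp3x3(R)  # Euler angles in degrees (one convention)
print("pitch/yaw/roll (deg):", [round(a, 1) for a in angles])
```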


Computer Vision and Image Understanding | 2015

On surveillance for safety critical events

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Highlights: a distributed camera-sensor system for driver assistance and situational awareness; a systematic, comparative evaluation of cues for predicting safety-critical events; real-time prediction of overtaking and braking maneuvers; a detailed temporal analysis of the utility of various cues for maneuver prediction; and early prediction (1-2 s before the maneuver) demonstrated on real-world data.

We study techniques for monitoring and understanding real-world human activities, in particular those of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically overtake and brake events. The study of this domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings.
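
A toy version of the early-prediction setup follows: windows of fused cue signals are labeled positive when they end 1-2 s before an annotated maneuver, and a plain logistic regression is trained on them. The signals, event times, and sampling rate are synthetic stand-ins, and the classifier is far simpler than the paper's framework.

```python
# Toy sketch: sliding-window "early prediction" labeling and training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
hz, win = 10, 20                       # assumed 10 Hz cues, 2 s windows
signal = rng.normal(0, 1, (600, 4))    # 60 s of 4 fused cues (placeholder)
event_frames = [150, 320, 480]         # annotated maneuver onsets (placeholder)

X, y = [], []
for start in range(0, len(signal) - win):
    end = start + win
    # Positive if the window ends 1-2 s before some annotated onset.
    positive = any(hz <= ev - end <= 2 * hz for ev in event_frames)
    X.append(signal[start:end].ravel())
    y.append(int(positive))

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("train accuracy (toy data):", clf.score(np.array(X), np.array(y)))
```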


International Conference on Pattern Recognition | 2014

Head, Eye, and Hand Patterns for Driver Activity Recognition

Eshed Ohn-Bar; Sujitha Martin; Ashish Tawari; Mohan M. Trivedi

In this paper, a multiview, multimodal vision framework is proposed to characterize driver activity based on head, eye, and hand cues. Leveraging the three types of cues allows for a richer description of the driver's state and improved activity detection performance. First, regions of interest are extracted from two videos, one observing the driver's hands and one the driver's head. Next, hand location hypotheses are generated and integrated with a head pose and facial landmark module in order to classify driver activity into three states: wheel region interaction with two hands on the wheel, gear region activity, or instrument cluster region activity. The method is evaluated on a video dataset captured in on-road settings.
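
The three-state decision itself can be summarized schematically, as in the sketch below; the gating rule and the yaw threshold are invented for illustration and are not the paper's learned classifier.

```python
# Schematic three-state activity decision from hand and head cues.
from dataclasses import dataclass

@dataclass
class Cues:
    hand_in_gear: bool     # hand hypothesis in the gear-shift region
    hand_in_cluster: bool  # hand hypothesis near the instrument cluster
    head_yaw_deg: float    # from the head pose / facial landmark module

def classify(c: Cues) -> str:
    # Cluster interaction usually co-occurs with a glance toward it;
    # use head yaw as weak confirmation (threshold is an assumption).
    if c.hand_in_cluster and abs(c.head_yaw_deg) > 15:
        return "instrument cluster region activity"
    if c.hand_in_gear:
        return "gear region activity"
    return "wheel region interaction (two hands on wheel)"

print(classify(Cues(hand_in_gear=False, hand_in_cluster=True,
                    head_yaw_deg=-25.0)))
```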


Automotive User Interfaces and Interactive Vehicular Applications | 2012

On the design and evaluation of robust head pose for visual user interfaces: algorithms, databases, and comparisons

Sujitha Martin; Ashish Tawari; Erik Murphy-Chutorian; Shinko Y. Cheng; Mohan M. Trivedi

An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze; head, hand, and foot movement; upper body posture; etc. In this paper, we focus on estimating head pose, which has been shown to be a good predictor of driver intent and a good proxy for gaze estimation, and we provide a valuable head pose database for future comparative studies. Most existing head pose estimation algorithms still struggle with large spatial head turns. Our method, by contrast, relies on facial features that remain visible even during large head turns. The method is evaluated on the LISA-P Head Pose database, which contains head pose data from on-road daytime and nighttime drivers of varying age, race, and gender; ground truth for head pose is provided by a motion capture system. With particular regard to eye gaze estimation for automotive user interface studies, the automatic head pose estimation technique presented here can replace eye gaze estimation methods that rely on manual data annotation, or be used in conjunction with them when necessary.
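
Since the database provides motion-capture ground truth, a natural evaluation is per-angle error; the sketch below scores a (simulated) estimator with mean absolute error and a within-tolerance rate. The arrays and the 10-degree tolerance are placeholders, not reported results.

```python
# Sketch: scoring head pose estimates against motion-capture ground truth.
import numpy as np

rng = np.random.default_rng(2)
gt = rng.uniform(-60, 60, (1000, 3))      # yaw/pitch/roll ground truth (toy)
est = gt + rng.normal(0, 3.0, gt.shape)   # simulated estimator output

mae = np.abs(est - gt).mean(axis=0)
within = (np.abs(est - gt) < 10).all(axis=1).mean()
for name, err in zip(("yaw", "pitch", "roll"), mae):
    print(f"{name} MAE: {err:.2f} deg")
print(f"frames within 10 deg on all axes: {within:.1%}")
```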


IEEE Intelligent Vehicles Symposium | 2014

Predicting driver maneuvers by learning holistic features

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

In this work, we propose a framework for the recognition and prediction of driver maneuvers by considering holistic cues. With an array of sensors, the driver's head, hand, and foot gestures are captured in a synchronized manner, together with lane, surrounding-agent, and vehicle parameters. An emphasis is placed on real-time algorithms. The cues are processed and fused using a latent-dynamic discriminative framework. As a case study, driver activity recognition and prediction in overtaking situations is performed on a naturalistic, on-road dataset. A consequence of this work would be the development of more effective driver analysis and assistance systems.
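
The paper's fusion model is a latent-dynamic discriminative framework (LDCRF), which has no widely used Python implementation; as a deliberately simpler stand-in, the sketch below labels frames with a per-frame classifier and then enforces temporal coherence with a median filter. The data and labels are synthetic.

```python
# Stand-in for latent-dynamic sequence labeling: per-frame classification
# followed by temporal smoothing (NOT the authors' LDCRF model).
import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
frames = rng.normal(0.0, 1.0, (500, 12))   # fused cue vector per frame (toy)
labels = (np.arange(500) // 100) % 2       # alternating activity labels (toy)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(frames[:400], labels[:400])
raw = clf.predict(frames[400:])
smoothed = median_filter(raw, size=9)      # enforce temporal coherence
print("frames changed by smoothing:", int((raw != smoothed).sum()))
```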


Journal of Electronic Imaging | 2013

Driver hand activity analysis in naturalistic driving studies: challenges, algorithms, and experimental studies

Eshed Ohn-Bar; Sujitha Martin; Mohan M. Trivedi

We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges of a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance-only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to retain the most reliable and relevant motion information; the accumulated image is fitted with ellipses in order to produce the locations of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, and (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions, and a second-stage classifier (a linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations in real-world driving shows the promise of applying the proposed systems both for post-analysis of captured driving data and for real-time driver assistance.
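
The motion-cue pipeline lends itself to a short OpenCV sketch: accumulate decaying edge maps over time, threshold, and fit ellipses to the surviving contours as hand hypotheses. The video path, decay rate, and thresholds below are assumptions, not the paper's settings.

```python
# Sketch: temporally accumulated edges with ellipse fitting for hand
# hypotheses, following the motion-cue approach described above.
import cv2
import numpy as np

cap = cv2.VideoCapture("hand_camera.avi")  # hypothetical recording
acc = None
for _ in range(30):                        # accumulate over ~1 s of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150).astype(np.float32)
    acc = edges if acc is None else 0.9 * acc + edges  # decay old edges

if acc is not None and acc.max() > 0:
    mask = (acc > 0.5 * acc.max()).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if len(c) >= 5:                    # cv2.fitEllipse needs >= 5 points
            (cx, cy), axes, angle = cv2.fitEllipse(c)
            print(f"hand hypothesis at ({cx:.0f}, {cy:.0f})")
```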


International Conference on Intelligent Transportation Systems | 2014

Attention Estimation by Simultaneous Analysis of Viewer and View

Ashish Tawari; Andreas Møgelmose; Sujitha Martin; Thomas B. Moeslund; Mohan M. Trivedi

This paper introduces a system for estimating the attention of a driver wearing a first-person-view camera, using salient objects to improve gaze estimation. A challenging data set of pedestrians crossing intersections was captured using Google Glass worn by a driver. A challenge unique to the first-person view from a car is that the interior of the vehicle can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation, and the remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze by examining the eye corners and the center of the iris. This coarse gaze estimate is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time.
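
The detect-after-masking step might look like the sketch below, which runs OpenCV's stock HOG pedestrian detector on a cropped region of interest; the fixed horizontal cut is a stand-in for the paper's automatic dashboard filtering, and the image path is hypothetical.

```python
# Sketch: pedestrian detection on a region of interest after masking out
# the vehicle interior (crude fixed crop instead of automatic filtering).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("glass_frame.jpg")      # hypothetical first-person frame
if frame is not None:
    h = frame.shape[0]
    roi = frame[: int(0.65 * h)]           # assume the lower 35% is dashboard
    rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
    for (x, y, bw, bh) in rects:
        print(f"pedestrian candidate at ({x}, {y}), size {bw}x{bh}")
```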


International Conference on Intelligent Transportation Systems | 2013

Towards automated drive analysis: A multimodal synergistic approach

Ravi Kumar Satzoda; Sujitha Martin; Minh Van Ly; Pujitha Gunaratne; Mohan M. Trivedi

Naturalistic driving studies (NDS) capture huge amounts of drive data, which are analyzed for critical information about driver behavior, driving characteristics, etc. Moreover, NDS involve data collected from a wide range of sensing technologies in cars, which makes the analysis of these data a challenging task. In this paper, we propose a multimodal, synergistic approach for an automated drive analysis process that can be employed in analyzing large amounts of drive data. The visual information from cameras, vehicle dynamics from the CAN bus, vehicle global positioning coordinates from GPS, and digital road map data collected during the drive are analyzed in a collaborative and complementary manner. We show that the proposed synergistic drive analysis approach automatically determines a wide range of critical information about the drive under varying road conditions.
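
One practical piece of such multimodal analysis is aligning asynchronous streams onto a common timeline; the sketch below merges CAN and GPS samples onto video frame timestamps with pandas. Column names, rates, and values are assumptions, not the paper's data format.

```python
# Sketch: time-aligning asynchronous CAN and GPS streams to video frames.
import pandas as pd

video = pd.DataFrame({"t": pd.to_datetime([0.0, 0.033, 0.066, 0.1], unit="s")})
can = pd.DataFrame({"t": pd.to_datetime([0.0, 0.02, 0.04, 0.06, 0.08], unit="s"),
                    "speed_kmh": [52.0, 52.3, 52.9, 53.1, 53.0]})
gps = pd.DataFrame({"t": pd.to_datetime([0.0, 0.1], unit="s"),
                    "lat": [32.8800, 32.8801], "lon": [-117.2400, -117.2401]})

# Nearest-earlier sample of each stream for every video frame.
fused = pd.merge_asof(video, can, on="t")
fused = pd.merge_asof(fused, gps, on="t")
print(fused)
```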


Computer Vision and Pattern Recognition | 2014

Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis

Eshed Ohn-Bar; Ashish Tawari; Sujitha Martin; Mohan M. Trivedi

Automotive systems provide a unique opportunity for mobile vision technologies to improve road safety by understanding and monitoring the driver. In this work, we propose a real-time framework for early detection of driver maneuvers. The study enables better behavior prediction and, therefore, the development of more effective advanced driver assistance and warning systems. Cues are extracted from an array of sensors observing the driver (head, hand, and foot), the environment (lane and surrounding vehicles), and the ego-vehicle state (speed, steering angle, etc.). Evaluation is performed on a real-world dataset with overtaking maneuvers, showing promising results. In order to gain better insight into the processes that characterize driver behavior, temporally discriminative cues are studied and visualized.
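
A toy rendition of the temporal cue analysis is sketched below: for several lead times before a maneuver, each cue's class separability is scored with an F-test and the most discriminative cue is reported. The data are synthetic and shaped so that foot cues dominate near the event while head cues dominate earlier, purely for illustration.

```python
# Toy sketch: which cue is most discriminative at each lead time?
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(4)
cues = ["head", "hand", "foot", "lane"]
for lead_s in (0, 1, 2, 3):
    # Synthetic class separations: foot cues strong near the event,
    # head cues decaying more slowly (purely illustrative shape).
    shift = np.array([1.5 / (1 + 0.3 * lead_s),   # head
                      1.0 / (1 + lead_s),          # hand
                      3.0 / (1 + 2 * lead_s),      # foot
                      0.4])                        # lane
    normal = rng.normal(0.0, 1.0, (100, 4))
    event = rng.normal(shift, 1.0, (100, 4))
    X = np.vstack([normal, event])
    y = np.array([0] * 100 + [1] * 100)
    F, _ = f_classif(X, y)
    print(f"{lead_s} s before maneuver: most discriminative cue = "
          f"{cues[int(np.argmax(F))]}")
```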

Collaboration


An overview of Sujitha Martin's collaborations.

Top Co-Authors

Ashish Tawari, University of California
Eshed Ohn-Bar, University of California
Kevan Yuen, University of California
Akshay Rangesh, University of California
Cuong Tran, University of California
Minh Van Ly, University of California
Borhan Vasli, University of California