Publication


Featured research published by Amardeep Sathyanarayana.


international conference on vehicular electronics and safety | 2008

Driver behavior analysis and route recognition by Hidden Markov Models

Amardeep Sathyanarayana; Pinar Boyraz; John H. L. Hansen

In this investigation, driver behavior signals are modeled using Hidden Markov Models (HMM) in two different and complementary approaches. The first approach considers isolated maneuver recognition with model concatenation to construct a generic route (bottom-to-top), whereas the second approach models the entire route as a 'phrase' and refines the HMM to discover maneuvers, parsing the route using the finer discovered maneuvers (top-to-bottom). By applying these two approaches, a hierarchical framework to model driver behavior signals is proposed. It is believed that using the proposed approach, driver identification and distraction detection problems can be addressed in a more systematic and mathematically sound manner. We believe that this framework and the initial results will encourage more investigations into driver behavior signal analysis and related safety systems employing a partitioned sub-module strategy.
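The maneuver-decoding step can be illustrated with a minimal discrete-HMM Viterbi decoder over quantized steering-wheel-angle symbols. The states, observation symbols, and all probabilities below are invented for illustration; they are not the models trained in the paper.

```python
# Minimal discrete-HMM Viterbi decoder sketching maneuver decoding.
# All states, symbols, and probabilities are illustrative assumptions.
import math

states = ["lane_keep", "lane_change", "turn"]

start = {"lane_keep": 0.8, "lane_change": 0.1, "turn": 0.1}
trans = {
    "lane_keep":   {"lane_keep": 0.90, "lane_change": 0.05, "turn": 0.05},
    "lane_change": {"lane_keep": 0.60, "lane_change": 0.35, "turn": 0.05},
    "turn":        {"lane_keep": 0.50, "lane_change": 0.10, "turn": 0.40},
}
# Observation symbols: quantized steering-wheel-angle bins.
emit = {
    "lane_keep":   {"straight": 0.85, "small_angle": 0.10, "large_angle": 0.05},
    "lane_change": {"straight": 0.20, "small_angle": 0.70, "large_angle": 0.10},
    "turn":        {"straight": 0.05, "small_angle": 0.25, "large_angle": 0.70},
}

def viterbi(obs):
    """Return the most likely maneuver-state sequence for the observations."""
    # Forward pass in log space for numerical stability.
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = v[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With these toy parameters, a run of `straight` symbols decodes as continuous lane keeping, while a burst of `large_angle` symbols is decoded as a turn.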


international conference on intelligent transportation systems | 2012

Leveraging sensor information from portable devices towards automatic driving maneuver recognition

Amardeep Sathyanarayana; Seyed Omid Sadjadi; John H. L. Hansen

With the proliferation of smart portable devices, more people have started using them within the vehicular environment while driving. Although these smart devices provide a variety of useful information, using them while driving significantly affects the driver's attention towards the road. This can in turn cause driver distraction and lead to increased risk of crashes. On the positive side, these devices are equipped with powerful sensors which can be effectively utilized towards driver behavior analysis and safety. This study evaluates the effectiveness of portable sensor information in driver assistance systems. Available signals from the CAN-bus are compared with those extracted from an off-the-shelf portable device for recognizing patterns in driving sub-tasks and maneuvers. Through our analysis, a qualitative feature set is identified with which portable devices could be employed to prune the search space in recognizing driving maneuvers and possible instances of driver distraction. An absolute improvement of 15% is achieved with portable sensor information compared to CAN-bus signals, which motivates further study of portable devices to build driver behavior models for driver assistance systems.


international conference on vehicular electronics and safety | 2008

Body sensor networks for driver distraction identification

Amardeep Sathyanarayana; Sandhya Nageswaren; Hassan Ghasemzadeh; Roozbeh Jafari; John H. L. Hansen

Cars have become a part of almost everyone's life, taking people from one place to another. In such a fast-paced mode of transport, there are a variety of ways in which drivers can get distracted while driving. Getting stuck in a traffic jam or performing other tasks simultaneously while driving (for example, drinking, reading, or talking on a mobile phone) are various forms of distraction. Early detection of driver distraction can reduce the number of accidents. This paper describes the initial analysis of a system for detecting driver distractions using data from the controller area network (CAN) and motion sensors (accelerometer and gyroscope). The paper mainly focuses on distractions perceivable through leg and head movements of the driver. The data from these expressive parts of the driver yield a high distraction detection accuracy of over 90%. With such high accuracies, reliable systems could be built with early-warning or corrective mechanisms that would avoid or reduce the intensity of accidents caused by driver distraction.
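As a rough sketch of how body-worn motion sensors can expose limb movement, the snippet below flags a window of accelerometer samples whose magnitude spread exceeds a threshold. The threshold value and the unit convention (magnitudes in g) are assumptions for illustration, not the paper's calibrated detector.

```python
# Sketch: flag noticeable limb movement from accelerometer windows.
# Threshold and units are illustrative assumptions.
import math

def accel_magnitude(sample):
    """Magnitude of one (ax, ay, az) accelerometer sample."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def flag_movement(samples, threshold=0.5):
    """Return True when the spread of magnitudes in the window exceeds
    the threshold, i.e. the monitored limb moved noticeably."""
    mags = [accel_magnitude(s) for s in samples]
    return (max(mags) - min(mags)) > threshold
```

A stationary limb (constant gravity vector) stays below the threshold, while a window containing a sudden movement trips the flag; a real system would combine such flags with CAN-bus context before declaring a distraction.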


ieee intelligent vehicles symposium | 2007

UTDrive: Driver Behavior and Speech Interactive Systems for In-Vehicle Environments

Pongtep Angkititrakul; Matteo Petracca; Amardeep Sathyanarayana; John H. L. Hansen

This paper describes an overview of the UTDrive project. UTDrive is part of an on-going international collaboration to collect and research rich multi-modal data recorded for modeling driver behavior in in-vehicle environments. The objective of the UTDrive project is to analyze behavior while the driver is interacting with speech-activated systems or performing common secondary tasks, as well as to better understand speech characteristics of the driver undergoing additional cognitive load. The corpus consists of audio, video, gas/brake pedal pressure, forward distance, GPS information, and CAN-Bus information. The resulting corpus, analysis, and modeling will contribute to more effective speech interactive systems that are less distracting and adjustable to the driver's cognitive capacity and driving situations.


ieee intelligent vehicles symposium | 2010

Driver adaptive and context aware active safety systems using CAN-bus signals

Amardeep Sathyanarayana; Pinar Boyraz; Zelam Purohit; Rosarita Lubag; John H. L. Hansen

Increasing stress levels in drivers, along with their tendency to multitask with infotainment systems, cause drivers to deviate their attention from the primary task of driving. With the rapid advancements in technology and the development of infotainment systems, much emphasis is being given to occupant safety. Modern vehicles are equipped with many sensors and ECUs (Embedded Control Units), and the CAN-bus (Controller Area Network) plays a significant role in handling the entire communication between the sensors, ECUs and actuators. Most of the mechanical links are replaced by intelligent processing units (ECUs) which take in signals from the sensors and provide measurements for proper functioning of engine and vehicle functionalities, along with several active safety systems such as ABS (Anti-lock Brake System) and ESP (Electronic Stability Program). Current active safety systems utilize the vehicle dynamics (using signals on the CAN-bus) but are unaware of context and driver status, and do not adapt to the changing mental and physical conditions of the driver. The traditional engine and active safety systems use a very small time window (t<2sec) of the CAN-bus to operate. On the contrary, the implementation of driver adaptive and context aware systems requires longer time windows and different methods of analysis. The long-term history and trends in the CAN-bus signals contain important information on driving patterns and driver characteristics. In this paper, a summary of systems that can be built on this type of analysis is presented. The CAN-bus signals are acquired and analyzed to recognize driving sub-tasks, maneuvers and routes. Driver inattention is assessed, and an overall system which acquires, analyzes and warns the driver in real-time while driving is presented, showing that an optimal human-machine cooperative system can be designed to achieve improved overall safety.
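The contrast drawn above, short windows for conventional control logic versus longer windows for driving-style trends, can be sketched as a sliding-window feature extractor over a CAN-bus signal. The choice of features (window mean and peak-to-peak range) and the window sizes are illustrative assumptions.

```python
# Sketch: long-window trend features over a sampled CAN-bus signal
# (e.g. steering wheel angle or speed). Features and window lengths
# are illustrative, not the paper's configuration.

def window_stats(signal, win, hop):
    """Slide a window of `win` samples over `signal` in steps of `hop`,
    returning (mean, peak-to-peak range) per window as crude trend
    features for longer-horizon driver-behavior analysis."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append((sum(w) / win, max(w) - min(w)))
    return feats
```

A short-window safety function would act on a single recent window, whereas a driver-adaptive system would track how these per-window features drift over minutes of driving.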


international conference on acoustics, speech, and signal processing | 2013

Overlapped-speech detection with applications to driver assessment for in-vehicle active safety systems

Navid Shokouhi; Amardeep Sathyanarayana; Seyed Omid Sadjadi; John H. L. Hansen

In this study we propose a system for overlapped-speech detection. Spectral harmonicity and envelope features are extracted to represent overlapped and single-speaker speech using Gaussian mixture models (GMM). The system is shown to effectively discriminate the single and overlapped speech classes. We further increase the discrimination by proposing a phoneme selection scheme to generate more reliable artificial overlapped data for model training. Evaluations on artificially generated co-channel data show that the novelty in feature selection and phoneme omission results in a relative improvement of 10% in detection accuracy compared to the baseline. As an example application, we evaluate the effectiveness of overlapped-speech detection for vehicular environments and its potential in assessing driver alertness. Results indicate a good correlation between driver performance and the amount and location of overlapped-speech segments.
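The two-class GMM decision described above can be sketched as a log-likelihood comparison: score a frame's feature under a single-speaker mixture and an overlapped-speech mixture, and pick the more likely class. Here a single hypothetical harmonicity value stands in for the feature vector, and all mixture parameters are invented for illustration.

```python
# Sketch: two-class GMM log-likelihood decision for overlapped speech.
# The 1-D "harmonicity" feature and all mixture parameters are
# illustrative assumptions, not trained models from the paper.
import math

def gauss_logpdf(x, mu, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def gmm_loglik(x, mixture):
    """Log-likelihood under a mixture given as (weight, mean, variance) tuples."""
    return math.log(sum(w * math.exp(gauss_logpdf(x, m, v)) for w, m, v in mixture))

# Single-speaker speech tends toward higher harmonicity; overlap lowers it.
single_gmm = [(0.6, 0.8, 0.01), (0.4, 0.6, 0.02)]
overlap_gmm = [(0.5, 0.4, 0.03), (0.5, 0.2, 0.02)]

def is_overlapped(harmonicity):
    """Classify a frame by comparing class log-likelihoods."""
    return gmm_loglik(harmonicity, overlap_gmm) > gmm_loglik(harmonicity, single_gmm)
```

A real system would score multi-dimensional harmonicity and envelope features per frame and smooth the frame decisions over time.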


ieee intelligent vehicles symposium | 2013

Belt Up: Investigating the impact of in-vehicular conversation on driving performance

Amardeep Sathyanarayana; Navid Shokouhi; Seyed Omid Sadjadi; John H. L. Hansen

In-vehicle conversations are typical when there is more than one person in the car. Although many conversations are beneficial in keeping the driver alert and active, there are also instances where a competitive conversation may adversely influence driving performance. Identifying such scenarios can improve vehicle safety systems by fusing the knowledge obtained from conversational speech analysis and vehicle dynamic signals. In this study we incorporate the use of smart portable devices to create a unified platform for recording in-vehicular conversations as well as the vehicle dynamic signals required to evaluate the driving performance. Results show that turn taking rate and overlapping speech segments under certain conditions correlate with deviations from normal driving patterns. The conversational speech analysis can thus be utilized as a component in driver assistance systems such that the impact of in-vehicle speech activity on driving performance is controlled or minimized.


IEEE Signal Processing Magazine | 2017

Driver Modeling for Detection and Assessment of Driver Distraction: Examples from the UTDrive Test Bed

John H. L. Hansen; Carlos Busso; Yang Zheng; Amardeep Sathyanarayana

Vehicle technologies have advanced significantly over the past 20 years, especially with respect to novel in-vehicle systems for route navigation, information access, infotainment, and connected vehicle advancements for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity and communications. While there is great interest in migrating to fully automated, self-driving vehicles, factors such as technology performance, cost barriers, public safety, insurance issues, legal implications, and government regulations suggest it is more likely that the first step in the progression will be multifunctional vehicles. Today, embedded controllers as well as a variety of sensors and high-performance computing in present-day cars allow for a smooth transition from complete human control toward semisupervised or assisted control, then to fully automated vehicles. Next-generation vehicles will need to be more active in assessing driver awareness, vehicle capabilities, and traffic and environmental settings, plus how these factors come together to determine a collaborative safe and effective driver-vehicle engagement for vehicle operation. This article reviews a range of issues pertaining to driver modeling for the detection and assessment of distraction. Examples from the UTDrive project are used whenever possible, along with a comparison to existing research programs. The areas addressed include 1) understanding driver behavior and distraction, 2) maneuver recognition and distraction analysis, 3) glance behavior and visual tracking, and 4) mobile platform advancements for in-vehicle data collection and human-machine interface. This article highlights challenges in achieving effective modeling, detection, and assessment of driver distraction using both UTDrive instrumented vehicle data and naturalistic driving data.


international conference on intelligent transportation systems | 2014

Threshold based decision-tree for automatic driving maneuver recognition using CAN-Bus signal

Yang Zheng; Amardeep Sathyanarayana; John H. L. Hansen

There is a growing need to develop automatic signal processing strategies for analysis, labeling, and modeling of massive naturalistic driving data. In order to contribute to the formulation of human centric driving assistive/active safety systems, it is also imperative to understand the human component along with vehicle dynamics. Using massive corpora of on-road real-traffic naturalistic driving data exported from CAN-Bus signals, it is possible to develop maneuver recognition for continuous driving performance evaluation. Although high accuracy of automatic maneuver recognition could be obtained using statistical methods such as Hidden Markov Models and/or Bayesian statistics, this comes at great cost in computational complexity, which can jeopardize the viability of applying such driving assistance in real-time embedded systems. To save computational resources, this study proposes a simpler threshold-based decision-tree strategy. The method is based on filterbank analysis of time-frequency spectrogram processing of steering wheel angle and vehicle speed signals. It is shown that threshold selection is critical to ensure that specific maneuvers with higher levels of danger receive higher recognition accuracy. Analysis of the proposed maneuver recognition system is performed using data from the UTDrive vehicle corpus, with 6-way decision-tree maneuver recognition performance ranging from 54% to 73% depending on the specific maneuver class. It is suggested that lower performing maneuvers could subsequently be enhanced with statistical model-based methods, for both embedded in-vehicle systems and tagging of massive data sets.
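The appeal of the threshold approach is that a window of CAN-Bus features is routed through a few fixed comparisons instead of a statistical model. The sketch below uses a 4-way tree over two hypothetical window-level features; the feature names and threshold values are assumptions for illustration, not the paper's calibrated 6-way tree.

```python
# Sketch: threshold-based decision tree over window-level CAN-Bus
# features. Feature names and thresholds are illustrative assumptions.

def classify_maneuver(steering_range_deg, mean_speed_kmh):
    """Crude 4-way maneuver label from two features computed over one
    analysis window: steering-angle range and mean vehicle speed."""
    if steering_range_deg < 10:
        # Near-constant steering: driving straight or creeping in traffic.
        return "straight" if mean_speed_kmh > 20 else "stop_and_go"
    if steering_range_deg < 60:
        # Moderate steering excursion at speed suggests a lane change.
        return "lane_change"
    # Large steering excursion suggests a turn.
    return "turn"
```

Because each window costs only a handful of comparisons, such a tree fits real-time embedded budgets, with statistical models reserved for the classes it confuses.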


5th Biennial Workshop on Digital Signal Processing for In-Vehicle Systems, DSP 2011 | 2014

Effects of Multitasking on Drivability Through CAN-Bus Analysis

Amardeep Sathyanarayana; Pinar Boyraz; John H. L. Hansen

Humans try their best to maximize their abilities to handle various kinds of tasks, be they physical, auditory, visual, or cognitive. The same is true when a person is driving a vehicle: while driving is the primary task, the driver will attempt to accomplish secondary tasks such as speaking over a cell phone, checking and creating text messages, and selecting music or viewing/accessing news. Though the driver's primary intention is a safe drive, as previous studies have shown (Wilde, Target risk: dealing with the danger of death, disease and damage in everyday decisions, 1994), drivers elevate their risk-taking ability to an optimal level. While performing various tasks this balance between drivability and risk taking can vary, leading to driver distraction and possible accidents. The automotive industry has taken special care to reduce the complexity of operating in-vehicle infotainment systems. Better ergonomics and haptic (tactile) systems have helped achieve comfortable usability. Advances in driver assistance systems have also resulted in increased use of audio-based feedback (Forlines et al., Comparison between spoken queries and menu-based interfaces for in-car digital music selection, 2005) from navigation and other systems. It is very important to understand how these secondary tasks and feedback systems affect the driver and his/her drivability. This chapter focuses on understanding how drivers react to various secondary tasks. An analysis of driving performance using vehicle dynamics and sensor information via CAN-bus shows interesting results on how performing secondary tasks affects some drivers. Previous studies (Sathyanarayana et al., Driver behavior analysis and route recognition by hidden Markov models, IEEE International Conference on Vehicular Electronics and Safety, 2008) have shown how maneuvers can be segmented into preparatory, maneuver, and recovery phases.
Initial results presented in this chapter show a similar trend in how drivers handle secondary tasks. Even if secondary tasks do not distract the driver, results show driver variations in anticipation of or preparation for the task, performing the task itself, and post-completion (recovery) of the task.

Collaboration


Explore Amardeep Sathyanarayana's collaborations.

Top Co-Authors

John H. L. Hansen, University of Texas at Dallas
Navid Shokouhi, University of Texas at Dallas
Pinar Boyraz, University of Texas at Dallas
Seyed Omid Sadjadi, University of Texas at Dallas
Yang Zheng, University of Texas at Dallas
Carlos Busso, University of Texas at Dallas
Nitish Krishnamurthy, University of Texas at Dallas
Zelam Purohit, University of Texas at Dallas