Akshay Rangesh
University of California, San Diego
Publications
Featured research published by Akshay Rangesh.
IEEE Intelligent Vehicles Symposium | 2017
Sourabh Vora; Akshay Rangesh; Mohan M. Trivedi
Knowledge of driver distraction will be important for self-driving cars in the near future to determine handoff times to the driver. A driver's gaze direction has previously been shown to be an important cue in understanding distraction. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized gaze zone estimation system that is invariant to different subjects, perspectives, and scales is still lagging behind. We take a step towards such a generalized system using a Convolutional Neural Network (CNN). To evaluate our system, we collect a large naturalistic driving dataset of 11 drives, driven by 10 subjects in two different cars, and label gaze zones for 47,515 frames. We train our CNN on 7 subjects and test on the remaining 3. Our best performing model achieves an accuracy of 93.36%, showing good generalization capability.
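The gaze zone classifier described above can be illustrated with a minimal numpy sketch: a tiny convolution, ReLU, global-average-pool, softmax pipeline mapping a face crop to a distribution over gaze zones. The zone list, filter sizes, and random weights below are hypothetical placeholders, not the paper's trained architecture.

```python
import numpy as np

# Hypothetical gaze zone set; the paper's zones may differ.
GAZE_ZONES = ["forward", "left mirror", "right mirror", "rearview",
              "speedometer", "infotainment", "eyes closed"]

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image x (H, W)
    with a bank of filters w (F, k, k)."""
    F, k, _ = w.shape
    H, W = x.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(x[i:i + k, j:j + k] * w[f])
    return out

def gaze_zone_probs(face_crop, conv_w, fc_w, fc_b):
    """Tiny CNN forward pass: conv -> ReLU -> global average pool -> softmax."""
    h = np.maximum(conv2d(face_crop, conv_w), 0.0)   # ReLU feature maps
    pooled = h.mean(axis=(1, 2))                     # one scalar per filter
    logits = fc_w @ pooled + fc_b
    e = np.exp(logits - logits.max())                # numerically stable softmax
    return e / e.sum()

# Random weights stand in for trained parameters.
rng = np.random.default_rng(0)
conv_w = rng.standard_normal((8, 3, 3)) * 0.1
fc_w = rng.standard_normal((len(GAZE_ZONES), 8)) * 0.1
fc_b = np.zeros(len(GAZE_ZONES))
probs = gaze_zone_probs(rng.random((16, 16)), conv_w, fc_w, fc_b)
```

In practice a much deeper network (and training on the labeled frames) replaces these random weights; the sketch only shows the input-to-zone-distribution shape of the model.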
Computer Vision and Pattern Recognition | 2016
Akshay Rangesh; Eshed Ohn-Bar; Mohan M. Trivedi
This work presents an occlusion-aware hand tracker to reliably track both hands of a person using a monocular RGB camera. To demonstrate its robustness, we evaluate the tracker on a challenging, occlusion-ridden naturalistic driving dataset, where hand motions of a driver are to be captured reliably. The proposed framework additionally encodes and learns tracklets corresponding to complex (yet frequently occurring) hand interactions offline, and makes an informed choice during data association. This provides positional information of the left and right hands with no intrusion (through complete or partial occlusions) over long, unconstrained video sequences in an online manner. The tracks thus obtained may find use in domains such as human activity analysis, gesture recognition, and higher-level semantic categorization.
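One ingredient such a tracker needs is a way to keep a hand's positional estimate alive through complete occlusions until the hand reappears. A minimal sketch of this idea, assuming a simple constant-velocity motion model (the offline-learned tracklet models in the paper are considerably more sophisticated):

```python
import numpy as np

def track_through_occlusion(detections):
    """Given per-frame hand positions (None when occluded), fill the gaps
    by propagating the last observed position with a constant-velocity
    motion model estimated from consecutive observations."""
    track = []
    last, velocity, gap = None, np.zeros(2), 0
    for det in detections:
        if det is not None:
            pos = np.asarray(det, dtype=float)
            if last is not None:
                velocity = (pos - last) / (gap + 1)  # average per-frame motion
            last, gap = pos, 0
            track.append(tuple(pos))
        else:
            gap += 1
            # Occluded frame: predict forward from the last observation.
            track.append(None if last is None else tuple(last + velocity * gap))
    return track

track = track_through_occlusion([(0, 0), (1, 0), None, None, (4, 0)])
# frames 2 and 3 are filled in at (2, 0) and (3, 0)
```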
International Conference on Intelligent Transportation Systems | 2016
Nachiket Deo; Akshay Rangesh; Mohan M. Trivedi
In this work, we explore Hidden Markov Models (HMMs) as an approach for modeling and recognizing dynamic hand gestures for the interface of in-vehicle infotainment systems. Unlike typical HMM-based approaches, we train the HMMs on richer shape descriptors such as HOG and CNN features. We analyze the optimal hyperparameters of the HMM for the task, and explore dimensionality reduction and data augmentation as methods for reducing overfitting of the HMMs. Finally, we experiment with a CNN-HMM hybrid framework, which uses a trained Convolutional Neural Network to estimate the emission probabilities of the HMM. We obtain a mean recognition accuracy of 57.50% on the VIVA hand gesture challenge, which, while not the best result on the dataset, shows the feasibility of the approach.
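The CNN-HMM hybrid can be sketched as follows: the CNN's per-frame state posteriors are converted into scaled likelihoods by dividing out the state priors (the standard hybrid recipe), and a Viterbi pass decodes the most likely state sequence. The 2-state transition matrix and posteriors below are toy values, not the paper's trained models.

```python
import numpy as np

def viterbi(log_pi, log_A, log_emis):
    """Most likely state sequence given log initial probs (S,), log transition
    matrix (S, S), and per-frame log emission scores (T, S)."""
    T, S = log_emis.shape
    delta = log_pi + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def cnn_to_log_emissions(posteriors, priors):
    """Hybrid trick: p(frame | state) is proportional to p(state | frame) / p(state)."""
    return np.log(posteriors) - np.log(priors)

# Toy 2-state example: CNN posteriors drift from state 0 to state 1.
posteriors = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
log_emis = cnn_to_log_emissions(posteriors, np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
path = viterbi(np.log([0.5, 0.5]), log_A, log_emis)   # [0, 0, 1, 1]
```

The transition smoothing is what distinguishes this from per-frame CNN classification: a single noisy posterior cannot flip the decoded state on its own.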
IEEE Transactions on Intelligent Transportation Systems | 2016
Akshay Rangesh; Eshed Ohn-Bar; Mohan M. Trivedi
Hands are a very important cue for understanding and analyzing driver activity and human activity in general. Vision-based hand detection and tracking involve major challenges, such as attaining robustness to inconsistencies in lighting and scale, background clutter, object occlusion/disappearance, and the large variability in hand shape, size, color, and structure. In this paper, we introduce a novel framework suitable for tracking multiple hands online. Hand detections are first generated in each frame; assigning tracks to these detections is modeled as a bipartite matching problem with an objective of minimizing the total cost. Both motion and appearance cues are integrated in order to gain robustness to occlusion, fast movement, and interacting hands. Additionally, we study the utility of a left-versus-right hand classifier to disambiguate hand tracks and reduce ID switches. The proposed tracker shows promise on an extensive, naturalistic, and publicly available driving (VIVA Challenge) data set, by tracking both hands of the driver and the passenger effectively.
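The assignment step can be illustrated with a small sketch: each track-detection pair receives a cost mixing a motion term (distance from the track's predicted position) and an appearance term, and the one-to-one assignment minimizing total cost is chosen. The cost weights and feature vectors below are illustrative, and brute-force search stands in for a proper Hungarian solver, which is fine for the handful of hands visible in a car cabin.

```python
import numpy as np
from itertools import permutations

def association_cost(track, det, alpha=0.5):
    """Combined motion + appearance cost: distance between the track's
    predicted position and the detection, plus a distance between
    appearance feature vectors. alpha (illustrative) balances the cues."""
    motion = np.linalg.norm(track["pred_pos"] - det["pos"])
    appearance = np.linalg.norm(track["feat"] - det["feat"])
    return alpha * motion + (1 - alpha) * appearance

def assign(tracks, dets):
    """Minimum-total-cost one-to-one assignment by brute force.
    Assumes len(tracks) <= len(dets)."""
    C = np.array([[association_cost(t, d) for d in dets] for t in tracks])
    best, best_cost = None, np.inf
    for perm in permutations(range(len(dets)), len(tracks)):
        cost = sum(C[i, j] for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return {i: j for i, j in enumerate(best)}, best_cost

# Two tracks, two detections: appearance + motion agree on the swap.
tracks = [{"pred_pos": np.array([0., 0.]), "feat": np.array([1., 0.])},
          {"pred_pos": np.array([10., 0.]), "feat": np.array([0., 1.])}]
dets = [{"pos": np.array([10., 1.]), "feat": np.array([0., 1.])},
        {"pos": np.array([0., 1.]), "feat": np.array([1., 0.])}]
mapping, cost = assign(tracks, dets)   # {0: 1, 1: 0}
```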
International Conference on Intelligent Transportation Systems | 2016
Akshay Rangesh; Eshed Ohn-Bar; Kevan Yuen; Mohan M. Trivedi
Over the last decade, there have been many studies that focus on modeling driver behavior, and in particular on detecting and overcoming driver distraction in an effort to reduce accidents caused by driver negligence. Such studies assume that the entire onus of avoiding accidents is on the driver alone. In this study, we adopt a different stance and study the behavior of pedestrians instead. In particular, we focus on detecting pedestrians who are engaged in secondary activities involving their cellphones and similar hand-held multimedia devices from a purely vision-based standpoint. To achieve this objective, we propose a pipeline incorporating articulated human pose estimation and the use of gradient-based image features to detect the presence/absence of a device in either hand of a pedestrian. Information from different streams and their dependencies on one another are encoded by a belief network. This network is then used to predict a probability score suggesting the involvement of a subject with his/her device.
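The final fusion step can be sketched as a naive-Bayes-style belief update over binary cue detections, producing the probability score for engagement. The cue names and conditional probability tables below are hypothetical stand-ins for the paper's learned belief network.

```python
# Hypothetical prior and CPTs, not the paper's learned parameters.
P_ENGAGED = 0.3
# Per cue: (P(detector fires | engaged), P(detector fires | not engaged))
CPT = {
    "device_left":  (0.6, 0.05),
    "device_right": (0.7, 0.05),
    "head_down":    (0.8, 0.2),
}

def engagement_probability(observations):
    """Fuse binary cue detections into P(engaged | observations) by
    multiplying per-cue likelihoods into the prior and normalizing."""
    p1, p0 = P_ENGAGED, 1 - P_ENGAGED
    for cue, seen in observations.items():
        pt, pf = CPT[cue]
        p1 *= pt if seen else (1 - pt)
        p0 *= pf if seen else (1 - pf)
    return p1 / (p1 + p0)

p = engagement_probability({"device_right": True, "head_down": True})  # 0.96
```

A full belief network would also encode dependencies between streams (e.g. pose conditioning the hand-device detectors) rather than treating the cues as independent as this sketch does.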
IEEE Intelligent Vehicles Symposium | 2016
Sujitha Martin; Akshay Rangesh; Eshed Ohn-Bar; Mohan M. Trivedi
In this paper, we study the complex coordination of head, eyes, and hands as the driver approaches a stop-controlled intersection. The proposed framework is made up of three major parts. The first part is the naturalistic driving dataset collection: synchronized capture of sensors looking in and looking out, multiple drivers driving in an urban environment, and segmenting events at stop-controlled intersections. The second part is extracting reliable features from purely vision sensors looking in at the driver: eye movements, head pose, and hand location relative to the wheel. The third part is the design of appropriate temporal features for capturing coordination. A random forest algorithm is employed for studying relevance and understanding the temporal evolution of head, eye, and hand cues. Using 24 different events (from 5 drivers, resulting in ~12,200 analyzed frames) of three different maneuvers at stop-controlled intersections, we found that preparatory motions range from a few milliseconds to a few seconds before the event occurs, depending on the modality (i.e. eyes, head, hand).
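The third part — scoring how discriminative each modality's cue is at each moment before the event — can be sketched with a simple per-frame relevance measure. A Fisher-style variance ratio stands in here for the random forest's feature importances, and the data is synthetic.

```python
import numpy as np

def fisher_score(x, y):
    """Between-class over within-class variance for one scalar feature."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

def temporal_relevance(features, labels, modalities):
    """features: (samples, frames, modalities) array of per-frame cues.
    Returns, per modality, a relevance score at each frame offset,
    exposing when a cue becomes discriminative before the event."""
    return {m: [fisher_score(features[:, t, i], labels)
                for t in range(features.shape[1])]
            for i, m in enumerate(modalities)}

# Synthetic example: head yaw separates two maneuvers only near the event.
rng = np.random.default_rng(1)
n, frames = 50, 10
features = rng.standard_normal((2 * n, frames, 2))
labels = np.array([0] * n + [1] * n)
features[n:, -3:, 0] += 3.0   # class 1 turns the head in the last 3 frames
relevance = temporal_relevance(features, labels, ["head_yaw", "eye_gaze"])
```

Plotting each modality's score over the frame axis would show the kind of modality-dependent preparatory window the paper reports.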
International Conference on Pattern Recognition | 2016
Sujitha Martin; Akshay Rangesh; Eshed Ohn-Bar; Mohan M. Trivedi
Drivers use some combination of head, eye, and hand movements to perform a variety of tasks, from driving-related tasks to non-driving secondary tasks. Furthermore, the combinations may vary depending on the task performed. It is important to model and understand these variations in order to build predictive systems, explore driving styles, detect activities, etc. This study therefore introduces a framework to model the spatio-temporal movements of head, eyes, and hands given naturalistic driving data looking in at the driver, for any events or tasks of interest. As a use case, we explore the temporal coordination of the modalities on data of drivers executing maneuvers at stop-controlled intersections; the maneuvers executed are go straight, turn left, and turn right. By training classifiers that quantify the discriminative quality of their input variables over sequentially increasing time windows, the experimental study at intersections shows which preparatory movements are distinguishable, and when and for how long they occur, in the range of a few milliseconds to a few seconds.
arXiv: Computer Vision and Pattern Recognition | 2018
Akshay Rangesh; Mohan M. Trivedi
arXiv: Computer Vision and Pattern Recognition | 2018
Akshay Rangesh; Mohan M. Trivedi
IEEE Transactions on Intelligent Vehicles | 2018
Nachiket Deo; Akshay Rangesh; Mohan M. Trivedi