Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tarak Gandhi is active.

Publication


Featured research published by Tarak Gandhi.


IEEE Transactions on Intelligent Transportation Systems | 2007

Pedestrian Protection Systems: Issues, Survey, and Challenges

Tarak Gandhi; Mohan M. Trivedi

This paper describes recent research on the enhancement of pedestrian safety to help develop a better understanding of the nature, issues, approaches, and challenges surrounding the problem. It presents a comprehensive review of research efforts underway dealing with pedestrian safety and collision avoidance. The importance of pedestrian protection is emphasized in a global context, discussing the research programs and efforts in various countries. Pedestrian safety measures, including infrastructure enhancements and passive safety features in vehicles, are described, followed by a systematic description of active safety systems based on pedestrian detection using sensors in the vehicle and in the infrastructure. The pedestrian detection approaches are classified according to various criteria such as the type and configuration of sensors, as well as the video cues and classifiers used in detection algorithms. It is noted that collision avoidance requires not only the detection of pedestrians but also collision prediction using pedestrian dynamics and behavior analysis. Hence, this paper includes research dealing with probabilistic modeling of pedestrian behavior for predicting collisions between pedestrians and vehicles.


IEEE Transactions on Intelligent Transportation Systems | 2007

Looking-In and Looking-Out of a Vehicle: Computer-Vision-Based Enhanced Vehicle Safety

Mohan M. Trivedi; Tarak Gandhi; Joel C. McCall

This paper presents investigations into the role of computer-vision technology in developing safer automobiles. We consider vision systems that not only look out of the vehicle to detect and track roads and avoid hitting obstacles or pedestrians, but simultaneously look inside the vehicle to monitor the attentiveness of the driver and even predict her intentions. In this paper, a systems-oriented framework for developing computer-vision technology for safer automobiles is presented. We will consider three main components of the system: environment, vehicle, and driver. We will discuss various issues and ideas for developing models for these main components as well as activities associated with the complex task of safe driving. This paper includes a discussion of novel sensory systems and algorithms for capturing not only the dynamic surround information of the vehicle but also the state, intent, and activity patterns of drivers.


IEEE Transactions on Intelligent Transportation Systems | 2006

Vehicle Surround Capture: Survey of Techniques and a Novel Omni-Video-Based Approach for Dynamic Panoramic Surround Maps

Tarak Gandhi; Mohan M. Trivedi

Awareness of what surrounds a vehicle directly affects the safe driving and maneuvering of an automobile. This paper focuses on the capture of vehicle surroundings using video inputs. Surround information or maps can help in studies of driver behavior as well as provide critical input in the development of effective driver assistance systems. A survey of literature related to surround analysis is presented, emphasizing the detection of objects such as vehicles, pedestrians, and other obstacles. Omni cameras, which give a panoramic view of the surroundings, can be useful for visualizing and analyzing the nearby surroundings of the vehicle. The concept of a Dynamic Panoramic Surround (DPS) map, which shows the nearby surroundings of the vehicle and detects the objects of importance on the road, is introduced. A novel approach for synthesizing the DPS using stereo and motion analysis of video images from a pair of omni cameras on the vehicle is developed. Successful generation of the DPS in experimental runs on an instrumented vehicle test bed is demonstrated. These experiments prove the basic feasibility and show the promise of the omni-camera-based DPS capture algorithm to provide useful semantic descriptors of the state of moving vehicles and obstacles in the vicinity of a vehicle.
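
As a rough illustration of what such a panoramic surround map encodes, the sketch below bins hypothetical obstacle detections (assumed to come from an earlier stereo/motion stage) by azimuth around the vehicle and records the nearest range in each bearing bin. The bin count, coordinate convention, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def panoramic_surround_map(obstacles_xy, n_bins=360, max_range=30.0):
    """Bin obstacle detections by azimuth around the vehicle.

    obstacles_xy : iterable of (x, y) offsets in metres, with the vehicle at
                   the origin, x pointing forward and y pointing left.
    Returns an array of length n_bins giving, for each azimuth bin, the range
    to the nearest detected obstacle (or max_range if the bin is empty).
    """
    surround = np.full(n_bins, max_range)
    for x, y in obstacles_xy:
        azimuth = np.degrees(np.arctan2(y, x)) % 360.0   # bearing relative to heading
        rng = np.hypot(x, y)
        b = int(azimuth / 360.0 * n_bins) % n_bins
        surround[b] = min(surround[b], rng)
    return surround

# Example: two vehicles ahead and to the left, one obstacle behind to the right.
dps = panoramic_surround_map([(12.0, 3.0), (18.0, 4.0), (-8.0, -2.5)])
print(dps.argmin(), dps.min())   # bearing bin and range of the closest obstacle
```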


International Conference on Intelligent Transportation Systems | 2006

Pedestrian collision avoidance systems: a survey of computer vision based recent studies

Tarak Gandhi; Mohan M. Trivedi

This paper gives a survey of recent research on pedestrian collision avoidance systems. Collision avoidance requires not only the detection of pedestrians, but also collision prediction using pedestrian dynamics and behavior analysis. The paper reviews various approaches based on cues such as shape, motion, and stereo used for detecting pedestrians with visible as well as non-visible light sensors. This is followed by a study of research dealing with probabilistic modeling of pedestrian behavior for predicting collisions between pedestrians and vehicles. The literature review is also condensed in tabular form for quick reference.


IEEE Transactions on Aerospace and Electronic Systems | 2003

Detection of obstacles in the flight path of an aircraft

Tarak Gandhi; Mau-Tsuen Yang; Rangachar Kasturi; Octavia I. Camps; Lee D. Coraor; Jeffrey W. McCandless

The National Aeronautics and Space Administration (NASA), along with members of the aircraft industry, recently developed technologies for a new supersonic aircraft. One of the technological areas considered for this aircraft is the use of video cameras and image-processing equipment to aid the pilot in detecting other aircraft in the sky. The detection techniques should provide a high detection probability for obstacles that can vary from subpixel to a few pixels in size, while maintaining a low false alarm probability in the presence of noise and severe background clutter. Furthermore, the detection algorithms must be able to report such obstacles in a timely fashion, imposing severe constraints on their execution time. Approaches are described here to detect airborne obstacles on collision-course and crossing trajectories in video images captured from an airborne aircraft. In both cases the approaches consist of an image-processing stage to identify possible obstacles followed by a tracking stage to distinguish between true obstacles and image clutter, based on their behavior. For collision-course object detection, the image-processing stage uses a morphological filter to remove large-sized clutter. To remove the remaining small-sized clutter, differences in the behavior of image translation and expansion of the corresponding features are used in the tracking stage. For crossing object detection, the image-processing stage uses a low-stop filter and image differencing to separate stationary background clutter. The remaining clutter is removed in the tracking stage by assuming that the genuine object has a large signal strength, as well as a significant and consistent motion over a number of frames. The crossing object detection algorithm was implemented on a pipelined architecture from DataCube and runs in real time. Both algorithms have been successfully tested on flight tests conducted by NASA.
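
The collision-course stage above relies on morphological filtering to suppress large background clutter. The following sketch shows one common way to realize that idea as a top-hat filter in OpenCV; the kernel size, threshold, and function name are illustrative assumptions rather than the parameters used in the paper.

```python
import cv2

def detect_small_targets(frame_gray, kernel_size=7, threshold=30):
    """Morphological stage of a small-obstacle detector (top-hat filtering).

    An opening with a structuring element larger than the expected target
    suppresses large-scale background structure; subtracting the opened image
    from the original leaves small bright residues, which are thresholded to
    form candidate obstacle pixels for a subsequent tracking stage.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(frame_gray, cv2.MORPH_OPEN, kernel)
    residual = cv2.subtract(frame_gray, opened)            # top-hat response
    _, candidates = cv2.threshold(residual, threshold, 255, cv2.THRESH_BINARY)
    return candidates                                      # binary candidate mask
```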


IEEE Intelligent Vehicles Symposium | 2008

Image based estimation of pedestrian orientation for improving path prediction

Tarak Gandhi; Mohan M. Trivedi

Pedestrian protection is an essential component of driver assistance systems. A pedestrian protection system should be able to predict the possibility of collision after detecting the pedestrian, and it is important to consider all the available cues in order to make that prediction. The direction in which the pedestrian is facing is one such cue that could be used in predicting where the pedestrian may move in the future. This paper describes a novel approach to determine the pedestrian's orientation using a Support Vector Machine (SVM) based scheme. Instead of providing a hard decision, this scheme estimates the discrete probability distribution of the orientation. A Hidden Markov Model (HMM) is used to model the transitions between orientations over time, and the orientation probabilities are integrated over time to get a more reliable estimate of the orientation. Experiments evaluating the orientation estimates are described to demonstrate the promise of the approach.
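
To make the temporal-integration step concrete, here is a minimal forward-filtering sketch that fuses per-frame orientation distributions (such as those an SVM-based classifier might output) through an HMM transition matrix. The number of orientation bins and the transition values are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

def integrate_orientation(frame_probs, transition, prior=None):
    """HMM forward filtering over discrete pedestrian-orientation bins.

    frame_probs : (T, K) array; each row is the per-frame classifier's
                  probability distribution over K orientation bins.
    transition  : (K, K) matrix; transition[i, j] = P(bin j at t+1 | bin i at t).
    Returns the filtered distribution over the K bins after the last frame.
    """
    K = frame_probs.shape[1]
    belief = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, float)
    for obs in frame_probs:
        belief = obs * (transition.T @ belief)   # predict, then weight by the observation
        belief /= belief.sum()
    return belief

# Illustrative 8-bin transition matrix: a pedestrian mostly keeps facing the
# same way and occasionally turns toward an adjacent orientation.
K = 8
trans = np.full((K, K), 0.02)
np.fill_diagonal(trans, 0.80)
for i in range(K):
    trans[i, (i + 1) % K] += 0.03
    trans[i, (i - 1) % K] += 0.03
trans /= trans.sum(axis=1, keepdims=True)
```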


Intelligent Vehicles Symposium | 2003

Driver's view and vehicle surround estimation using omnidirectional video stream

Kohsia S. Huang; Mohan M. Trivedi; Tarak Gandhi

Our research is focused on the development of novel machine-vision-based telematic systems, which provide non-intrusive probing of the state of the driver and driving conditions. In this paper we present a system which allows simultaneous capture of the driver's head pose, driving view, and surroundings of the vehicle. The integrated machine vision system utilizes a video stream with a full 360-degree panoramic field of view. The processing modules include perspective transformation, feature extraction, head detection, head pose estimation, driving view synthesis, and motion segmentation. The paper presents multi-state statistical decision models with Kalman-filtering-based tracking for head pose detection and face orientation estimation. The basic feasibility and robustness of the approach are demonstrated with a series of systematic experimental studies.
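
As a hedged illustration of the Kalman-filtering component mentioned above, the sketch below smooths a noisy per-frame head-yaw estimate with a constant-velocity Kalman filter. The state layout, noise values, and class name are assumptions for illustration, not the model described in the paper.

```python
import numpy as np

class AngleKalmanTracker:
    """Constant-velocity Kalman filter smoothing a noisy head-yaw estimate."""

    def __init__(self, dt=1 / 30.0, process_var=1.0, meas_var=25.0):
        self.x = np.zeros(2)                        # state: [angle (deg), angular rate]
        self.P = np.eye(2) * 1e3                    # large initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        self.Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                         [dt**3 / 2, dt**2]])
        self.H = np.array([[1.0, 0.0]])             # we only measure the angle
        self.R = np.array([[meas_var]])

    def update(self, measured_angle):
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the per-frame pose measurement.
        y = measured_angle - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                            # filtered yaw angle
```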


Machine Vision and Applications | 2005

Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera

Tarak Gandhi; Mohan M. Trivedi

Omnidirectional cameras that give a 360° panoramic view of the surroundings have recently been used in many applications such as robotics, navigation, and surveillance. This paper describes the application of parametric ego-motion estimation for vehicle detection to perform surround analysis using an automobile-mounted camera. For this purpose, the parametric planar motion model is integrated with the transformations that compensate for the distortion in omnidirectional images. The framework is used to detect objects with independent motion or height above the road. Camera calibration as well as the approximate vehicle speed obtained from a CAN bus are integrated with the motion information from spatial and temporal gradients using a Bayesian approach. The approach is tested for various configurations of an automobile-mounted omni camera as well as a rectilinear camera. Successful detection and tracking of moving vehicles and generation of a surround map are demonstrated for application to intelligent driver support.
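
A minimal sketch of the planar-motion-compensation idea: given the road-plane homography between two frames (which the paper derives from calibration, the motion model, and the vehicle speed), pixels that obey the planar model cancel after warping, while obstacles with height above the road or independent motion leave a residual. The threshold and function name are illustrative assumptions.

```python
import cv2

def independent_motion_mask(prev_gray, curr_gray, H_road, diff_thresh=20):
    """Flag pixels that violate the planar (road) ego-motion model.

    H_road is the 3x3 homography that maps road-plane pixels of the previous
    frame into the current frame, assumed to be obtained elsewhere from the
    calibration and the vehicle speed.  Road-plane pixels are cancelled by the
    warp; obstacles with height or independent motion leave a large residual.
    """
    h, w = curr_gray.shape
    predicted = cv2.warpPerspective(prev_gray, H_road, (w, h))
    residual = cv2.absdiff(curr_gray, predicted)
    _, mask = cv2.threshold(residual, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```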


International Conference on Pattern Recognition | 1994

An automatic jigsaw puzzle solver

D.A. Kosiba; P.M. Devaux; S. Balasubramanian; Tarak Gandhi; R. Kasturi

A computer vision system to automatically analyze and assemble an image of the pieces of a jigsaw puzzle is presented. The system, called Automatic Puzzle Solver (APS), derives a new set of features based on the shape and color characteristics of the puzzle pieces. A combination of the shape-dependent features and color cues is used to match the puzzle pieces. Matching is performed using a modified iterative labeling procedure in order to reconstruct the original picture represented by the jigsaw puzzle. Algorithms for obtaining the shape description and for matching are explained, along with experimental results.
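
To illustrate how shape-dependent features and color cues might be combined into a single match score, here is a small sketch that scores a pair of piece edges by complementary curvature and color continuity. The feature definitions, weighting, and function name are assumptions for illustration, not the actual features used in the APS system.

```python
import numpy as np

def edge_match_score(shape_a, shape_b, color_a, color_b, alpha=0.5):
    """Score how well two puzzle-piece edges fit together (higher is better).

    shape_* : 1-D curvature profiles sampled along each edge; a mating edge
              should present the reversed, sign-flipped profile of its partner,
              so the two profiles should roughly cancel when added.
    color_* : (N, 3) RGB samples along each edge; colors should be continuous
              across the seam once one edge is traversed in reverse.
    alpha   : weight between the shape term and the color term.
    """
    shape_term = -np.mean((np.asarray(shape_a) + np.asarray(shape_b)[::-1]) ** 2)
    color_term = -np.mean((np.asarray(color_a, float) - np.asarray(color_b, float)[::-1]) ** 2)
    return alpha * shape_term + (1 - alpha) * color_term
```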


Machine Vision and Applications | 2007

Person tracking and reidentification: Introducing Panoramic Appearance Map (PAM) for feature representation

Tarak Gandhi; Mohan M. Trivedi

This paper develops the concept of a Panoramic Appearance Map (PAM) for performing person reidentification in a multi-camera setup. Each person is tracked in multiple cameras, and the position on the floor plan is determined using triangulation. Using the geometry of the cameras and the person's location, a panoramic map centered at the person's location is created, with the horizontal axis representing the azimuth angle and the vertical axis representing the height. Each pixel in the map image gets color information from the cameras that can observe it. The maps from different tracks are compared using a distance measure based on a weighted sum of squared differences (SSD) in order to select the best match. Temporal integration by registering multiple maps over the tracking period improves the matching performance. Experimental results of matching persons between two camera sets show the effectiveness of the approach.
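
The sketch below shows one way to implement the weighted-SSD comparison described above: cells of the panoramic map that no camera observed carry zero weight and are excluded, and the best match in a gallery of stored maps is the one with the smallest distance. The array shapes and function names are assumptions for illustration.

```python
import numpy as np

def pam_distance(map_a, map_b, weight_a, weight_b):
    """Weighted SSD between two panoramic appearance maps.

    map_*    : (H, W, 3) color maps indexed by height (rows) and azimuth (columns).
    weight_* : (H, W) visibility weights; cells never observed by any camera
               carry zero weight and do not contribute to the distance.
    """
    w = weight_a * weight_b                         # compare only mutually observed cells
    if w.sum() == 0:
        return np.inf
    sq = ((map_a.astype(float) - map_b.astype(float)) ** 2).sum(axis=2)
    return float((w * sq).sum() / w.sum())

def best_match(query_map, query_w, gallery):
    """Return the index of the stored (map, weight) pair closest to the query."""
    dists = [pam_distance(query_map, query_w, m, w) for m, w in gallery]
    return int(np.argmin(dists))
```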

Collaboration


Dive into Tarak Gandhi's collaborations.

Top Co-Authors

Rangachar Kasturi, University of South Florida
Octavia I. Camps, Pennsylvania State University
Lee D. Coraor, Pennsylvania State University
Mau-Tsuen Yang, National Dong Hwa University
Joel C. McCall, University of California
Remy Chang, University of California
Sadashiva Devadiga, Pennsylvania State University