
Publication


Featured research published by Amit A. Kale.


IEEE Transactions on Image Processing | 2004

Identification of humans using gait

Amit A. Kale; Aravind Sundaresan; A. N. Rajagopalan; Naresh P. Cuntoor; Amit K. Roy-Chowdhury; Volker Krüger; Rama Chellappa

We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating what we call the frame to exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability B) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition. The performance of the methods is illustrated using several databases.
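The frame-to-exemplar (FED) distance described above reduces a high-dimensional image feature to one Euclidean distance per exemplar. A minimal NumPy sketch of that idea (the features and exemplar stances below are toy values, not the authors' data or implementation):

```python
import numpy as np

def fed_vector(frame_feature, exemplars):
    """Frame-to-exemplar distance (FED): Euclidean distance from one frame's
    feature vector to each exemplar stance, yielding a low-dimensional
    observation vector (one entry per exemplar) for the HMM."""
    frame_feature = np.asarray(frame_feature, dtype=float)
    return np.array([np.linalg.norm(frame_feature - np.asarray(e, dtype=float))
                     for e in exemplars])

# Toy example: 4-dimensional width features, 3 exemplar stances.
exemplars = [np.zeros(4), np.ones(4), np.full(4, 2.0)]
obs = fed_vector(np.ones(4), exemplars)   # 3-dimensional observation vector
```

In the paper's indirect approach, sequences of such low-dimensional vectors (rather than the raw silhouettes) are what the HMM is trained on.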


Lecture Notes in Computer Science | 2003

Gait analysis for human identification

Amit A. Kale; Naresh P. Cuntoor; B. Yegnanarayana; A. N. Rajagopalan; Rama Chellappa

Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-based approach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector, such as the downsampled and smoothed width vectors and the velocity profile, and sequences of such temporally ordered feature vectors are used for representing a person's gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally occurring changes in walking speed. The performance of the proposed method is tested using different gait databases.
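The DTW matching used here is the standard dynamic-programming recurrence, which absorbs changes in walking speed by allowing non-linear alignment of the two feature sequences. A minimal sketch (generic DTW, not the paper's exact variant):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two feature sequences.
    Each sequence is a 1-D array of scalars or a 2-D array with one
    feature vector per frame; returns the accumulated alignment cost."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # insertion, deletion, or match step
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Two sequences tracing the same shape at different speeds (e.g. `[0, 1, 2, 1]` vs. `[0, 1, 1, 2, 1]`) get an alignment cost of zero, which is exactly the time-normalization property the abstract relies on.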


Advanced Video and Signal Based Surveillance | 2003

Towards a view invariant gait recognition algorithm

Amit A. Kale; Amit K. Roy Chowdhury; Rama Chellappa

Human gait is a spatio-temporal phenomenon and typifies the motion characteristics of an individual. The gait of a person is easily recognizable when extracted from a side-view of the person. Accordingly, gait-recognition algorithms work best when presented with images where the person walks parallel to the camera image plane. However, it is not realistic to expect this assumption to be valid in most real-life scenarios. Hence, it is important to develop methods whereby the side-view can be generated from any other arbitrary view in a simple, yet accurate, manner. This is the main theme of the paper. We show that if the person is far enough from the camera, it is possible to synthesize a side view (referred to as canonical view) from any other arbitrary view using a single camera. Two methods are proposed for doing this: (i) using the perspective projection model; (ii) using the optical flow based structure from motion equations. A simple camera calibration scheme for this method is also proposed. Examples of synthesized views are presented. Preliminary testing with gait recognition algorithms gives encouraging results. A by-product of this method is a simple algorithm for synthesizing novel views of a planar scene.
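The by-product mentioned at the end, synthesizing novel views of a planar scene, comes down to applying a 3x3 planar homography to image points. A minimal NumPy sketch of that operation (the homography values below are illustrative, not from the paper):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 planar homography H to an array of 2-D points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to inhomogeneous

# A pure-translation homography shifts every point by (tx, ty) = (3, -1).
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])
out = warp_points(H, [[0.0, 0.0], [2.0, 5.0]])
```

In the paper's setting, the interesting part is estimating such a mapping from the walking direction so that an arbitrary view can be warped to the canonical side view.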


IEEE International Conference on Automatic Face and Gesture Recognition | 2002

Gait-based recognition of humans using continuous HMMs

Amit A. Kale; A. N. Rajagopalan; N. Cuntoor; Volker Krüger

Gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. In this paper, we propose a view-based approach to recognize humans through gait. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of stances or key frames that occur during the walk cycle of an individual is chosen. Euclidean distances of a given image from this stance set are computed and a lower-dimensional observation vector is generated. A continuous hidden Markov model (HMM) is trained using several such lower-dimensional vector sequences extracted from the video. This methodology serves to compactly capture structural and transitional features that are unique to an individual. The statistical nature of the HMM renders overall robustness to gait representation and recognition. The human identification performance of the proposed scheme is found to be quite good when tested in natural walking conditions.
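Once an HMM is trained on such distance-vector sequences, identification amounts to scoring a test sequence with the forward algorithm. A minimal sketch for a discrete two-state case, assuming the per-frame observation likelihoods b_j(o_t) have already been evaluated (toy numbers; the paper uses a continuous HMM):

```python
import numpy as np

def forward_log_likelihood(pi, A, B):
    """log P(observation sequence) for an HMM via the scaled forward algorithm.
    pi: initial state probabilities, shape (N,)
    A:  state transition matrix, shape (N, N)
    B:  per-frame observation likelihoods b_j(o_t), shape (T, N)"""
    alpha = pi * B[0]
    log_like = 0.0
    for t in range(1, len(B)):
        c = alpha.sum()              # scale to avoid numerical underflow
        log_like += np.log(c)
        alpha = (alpha / c) @ A * B[t]
    log_like += np.log(alpha.sum())
    return log_like

# Toy 2-state model: start in state 0, no transitions, two frames.
pi = np.array([1.0, 0.0])
A = np.eye(2)
B = np.array([[1.0, 1.0],
              [0.5, 1.0]])
ll = forward_log_likelihood(pi, A, B)    # log(0.5) for this toy model
```

The claimed identity is then the trained model with the highest log-likelihood for the test sequence.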


International Conference on Acoustics, Speech, and Signal Processing | 2004

Fusion of gait and face for human identification

Amit A. Kale; Amit K. Roy-Chowdhury; Rama Chellappa

Identification of humans from arbitrary viewpoints is an important requirement for tasks including perceptual interfaces for intelligent environments, covert security, and access control. For optimal performance, the system must use as many cues as possible and combine them in meaningful ways. In this paper, we discuss fusion of face and gait cues for the single-camera case. We present a view-invariant gait recognition algorithm and employ decision fusion to combine its results with those of a face recognition algorithm based on sequential importance sampling. We consider two fusion scenarios: hierarchical and holistic. The first involves using the gait recognition algorithm as a filter to pass on a smaller set of candidates to the face recognition algorithm. The second involves combining the similarity scores obtained individually from the face and gait recognition algorithms. Simple rules like the SUM, MIN and PRODUCT are used for combining the scores. The results of fusion experiments are demonstrated on the NIST database, which has outdoor gait and face data of 30 subjects.
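The hierarchical scenario can be sketched in a few lines: gait similarity prunes the gallery to a short-list, and face similarity decides among the survivors. A toy NumPy version (candidate ids and score values are made up; the real algorithms produce the scores):

```python
import numpy as np

def hierarchical_fusion(gait_scores, face_scores, k):
    """Hierarchical fusion: the gait recognizer acts as a filter passing
    its top-k candidates; the face recognizer then ranks that short-list.
    Scores are similarity values indexed by candidate id (higher = better).
    Returns the id of the winning candidate."""
    gait_scores = np.asarray(gait_scores, dtype=float)
    face_scores = np.asarray(face_scores, dtype=float)
    shortlist = np.argsort(gait_scores)[::-1][:k]        # gait's top-k ids
    return shortlist[np.argmax(face_scores[shortlist])]  # best face among them

gait = [0.9, 0.2, 0.8, 0.1]
face = [0.1, 0.9, 0.7, 0.3]
winner = hierarchical_fusion(gait, face, k=2)
```

Here gait short-lists candidates 0 and 2; candidate 1's high face score is never consulted, which illustrates both the speed advantage and the risk of filtering too aggressively.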


International Conference on Acoustics, Speech, and Signal Processing | 2003

Combining multiple evidences for gait recognition

Naresh P. Cuntoor; Amit A. Kale; Rama Chellappa

In this paper, we systematically analyze different components of human gait, for the purpose of human identification. We investigate dynamic features such as the swing of the hands/legs, the sway of the upper body and static features like height, in both frontal and side views. Both probabilistic and non-probabilistic techniques are used for matching the features. Various combination strategies may be used depending upon the gait features being combined. We discuss three simple rules: the SUM, PRODUCT, and MIN rules that are relevant to our feature sets. Experiments using four different datasets demonstrate that fusion can be used as an effective strategy in recognition.
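The three combination rules are simple element-wise reductions over the per-feature score vectors. A toy sketch (score values are illustrative) showing that the rules can disagree on the top-ranked candidate:

```python
import numpy as np

def combine_scores(score_sets, rule="sum"):
    """Combine per-feature similarity score vectors with the SUM, PRODUCT,
    or MIN rule. score_sets has shape (n_features, n_candidates); the
    result has one combined score per candidate."""
    S = np.asarray(score_sets, dtype=float)
    if rule == "sum":
        return S.sum(axis=0)
    if rule == "product":
        return S.prod(axis=0)
    if rule == "min":
        return S.min(axis=0)
    raise ValueError(f"unknown rule: {rule}")

# Two features (e.g. arm swing, body sway), two candidates.
scores = [[0.9, 0.4],
          [0.2, 0.5]]
```

With these numbers the SUM rule prefers candidate 0 (1.1 vs. 0.9) while the PRODUCT rule prefers candidate 1 (0.18 vs. 0.20), which is why the choice of rule has to match the feature set, as the abstract notes.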


Computer Vision and Pattern Recognition | 2008

Towards fast, view-invariant human action recognition

Srikanth Cherla; Kaustubh Kulkarni; Amit A. Kale; V. Ramasubramanian

In this paper, we propose a fast method to recognize human actions which accounts for intra-class variability in the way an action is performed. We propose the use of a low dimensional feature vector which consists of (a) the projections of the width profile of the actor on to an "action basis" and (b) simple spatio-temporal features. The action basis is built using eigenanalysis of walking sequences of different people. Given the limited amount of training data, Dynamic Time Warping (DTW) is used to perform recognition. We propose the use of the average-template with multiple features, first used in speech recognition, to better capture the intra-class variations for each action. We demonstrate the efficacy of this algorithm using our low dimensional feature to robustly recognize human actions. Furthermore, we show that view-invariant recognition can be performed by using a simple data fusion of two orthogonal views. For the actions that are still confusable, a temporal discriminative weighting scheme is used to distinguish between them. The effectiveness of our method is demonstrated by conducting experiments on the multi-view IXMAS dataset of persons performing various actions.
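Building an "action basis" by eigenanalysis and projecting width profiles onto it is standard PCA machinery. A minimal NumPy sketch, assuming each training row is one frame's width profile (toy 3-dimensional profiles; the real profiles are much higher-dimensional):

```python
import numpy as np

def build_action_basis(training_profiles, k):
    """Eigenanalysis of width profiles: mean-centre the training frames
    and keep the top-k principal directions as the action basis."""
    X = np.asarray(training_profiles, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # basis rows are orthonormal

def project(profile, mean, basis):
    """Low-dimensional feature: projection of one width profile onto the basis."""
    return (np.asarray(profile, dtype=float) - mean) @ basis.T

# Toy training set of four 3-dimensional width profiles.
train = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [2.0, 2.0, 2.0]])
mean, basis = build_action_basis(train, k=2)
feat = project(train[0], mean, basis)    # 2-dimensional feature vector
```

Sequences of these projected features, together with the spatio-temporal features, are what the DTW matcher described above would compare.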


International Conference on Acoustics, Speech, and Signal Processing | 2002

A framework for activity-specific human identification

Amit A. Kale; Naresh P. Cuntoor; Rama Chellappa

In this paper we propose a view-based approach to recognize humans when engaged in some activity. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of exemplars that occur during an activity cycle is chosen for each individual. Using these exemplars a lower-dimensional Frame to Exemplar Distance (FED) vector is generated. A continuous HMM is trained using several such FED vector sequences. This methodology serves to compactly capture structural and dynamic features that are unique to an individual. The statistical nature of the HMM renders overall robustness to representation and recognition. Human identification performance of the proposed scheme is found to be quite good when tested on outdoor video sequences collected using surveillance cameras.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

Towards a robust, real-time face processing system using CUDA-enabled GPUs

Bharatkumar Sharma; Rahul Thota; Nagavijayalakshmi Vydyanathan; Amit A. Kale

Processing of human faces finds application in various domains like law enforcement and surveillance, entertainment (interactive video games), information security, smart cards, etc. Several of these applications are interactive and require reliable and fast face processing. A generic face processing system may comprise face detection, recognition, tracking and rendering. In this paper, we develop a GPU-accelerated real-time and robust face processing system that does face detection and tracking. Face detection is done by adapting the Viola and Jones algorithm that is based on the Adaboost learning system. For robust tracking of faces across real-life illumination conditions, we leverage the algorithm proposed by Thota and others, that combines the strengths of Adaboost and an image-based parametric illumination model. We design and develop optimized parallel implementations of these algorithms on graphics processors using the Compute Unified Device Architecture (CUDA), a C-based programming model from NVIDIA. We evaluate our face processing system using both static image databases as well as live frames captured from a FireWire camera under realistic conditions. Our experimental results indicate that our parallel face detector and tracker achieve much greater detection speeds as compared to existing work, while maintaining accuracy. We also demonstrate that our tracking system is robust to extreme illumination conditions.
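At the core of the Viola-Jones detector being parallelized here is the integral image (summed-area table), which lets any rectangular Haar-like feature sum be evaluated in constant time from four table lookups. A CPU-side NumPy sketch of that data structure (illustrative only; the paper's implementation is in CUDA):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four integral-image lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# Toy 3x4 "image".
img = np.arange(12, dtype=float).reshape(3, 4)
ii = integral_image(img)
total = box_sum(ii, 0, 0, 3, 4)          # whole-image sum
inner = box_sum(ii, 1, 1, 3, 3)          # sum of img[1:3, 1:3]
```

Because every detection window evaluates thousands of such box sums independently, this is precisely the kind of workload that maps well onto a GPU.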


IEEE Transactions on Image Processing | 2012

Particle Filter With a Mode Tracker for Visual Tracking Across Illumination Changes

Samarjit Das; Amit A. Kale; Namrata Vaswani

In this correspondence, our goal is to develop a visual tracking algorithm that is able to track moving objects in the presence of illumination variations in the scene and that is robust to occlusions. We treat the illumination and motion (x-y translation and scale) parameters as the unknown “state” sequence. The observation is the entire image, and the observation model allows for occasional occlusions (modeled as outliers). The nonlinearity and multimodality of the observation model necessitate the use of a particle filter (PF). Due to the inclusion of illumination parameters, the state dimension increases, thus making regular PFs impractically expensive. We show that the recently proposed approach using a PF with a mode tracker can be used here since, even in most occlusion cases, the posterior of illumination conditioned on motion and the previous state is unimodal and quite narrow. The key idea is to importance sample on the motion states while approximating importance sampling by posterior mode tracking for estimating illumination. Experiments demonstrate the advantage of the proposed algorithm over existing PF-based approaches for various face and vehicle tracking tasks. We are also able to detect illumination model changes, e.g., those due to transition from shadow to sunlight or vice versa, by using the generalized expected log-likelihood statistics, and successfully compensate for them without ever losing track.
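The importance sampling on motion states builds on the standard bootstrap particle filter: propagate each particle through the motion model, reweight by the observation likelihood, and resample. A minimal sketch for a scalar motion state (generic PF machinery only; the paper's contribution, tracking the mode of the illumination posterior alongside the sampled motion, is omitted, and the noise parameters below are made up):

```python
import numpy as np

def particle_filter_step(particles, weights, observation, rng,
                         motion_std=1.0, obs_std=0.5):
    """One bootstrap-PF step for a scalar motion state.
    Propagate with a random-walk motion model, reweight with a Gaussian
    observation likelihood, then resample to uniform weights."""
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a stationary target at position 5.0 starting from particles at 0.
rng = np.random.default_rng(0)
n = 500
particles = np.zeros(n)
weights = np.full(n, 1.0 / n)
for _ in range(20):
    particles, weights = particle_filter_step(particles, weights, 5.0, rng)
```

The cost of this scheme grows quickly with state dimension, which is exactly why the paper replaces importance sampling on the illumination parameters with posterior mode tracking.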

Collaboration


Dive into Amit A. Kale's collaboration.

Top Co-Authors


A. N. Rajagopalan

Indian Institute of Technology Madras
