A. N. Rajagopalan
Indian Institute of Technology Madras
Publications
Featured research published by A. N. Rajagopalan.
IEEE Transactions on Image Processing | 2004
Amit A. Kale; Aravind Sundaresan; A. N. Rajagopalan; Naresh P. Cuntoor; Amit K. Roy-Chowdhury; Volker Krüger; Rama Chellappa
We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower-dimensional space by generating what we call the frame-to-exemplar distance (FED). The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability B) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition. The performance of the methods is illustrated using several databases.
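A minimal numpy sketch of the frame-to-exemplar distance step described above; the array shapes, exemplar count, and variable names are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def fed_vector(frame_feature, exemplars):
    """Distance of one frame's feature vector to each stance exemplar.

    frame_feature : (d,) width-of-contour feature of the current frame
    exemplars     : (k, d) stance exemplars picked from the walk cycle
    returns       : (k,) frame-to-exemplar distance (FED) vector
    """
    return np.linalg.norm(exemplars - frame_feature, axis=1)

# A gait sequence then becomes a sequence of low-dimensional FED vectors,
# which serve as observations for the hidden Markov model.
features = np.random.rand(120, 64)      # 120 frames, 64-dimensional width feature
exemplars = features[::24][:5]          # 5 illustrative stance exemplars
fed_sequence = np.array([fed_vector(f, exemplars) for f in features])
print(fed_sequence.shape)               # (120, 5)
```

Each frame is thus reduced from a high-dimensional image feature to a short distance vector, which keeps the HMM from having to model high-dimensional densities.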
Lecture Notes in Computer Science | 2003
Amit A. Kale; Naresh P. Cuntoor; B. Yegnanarayana; A. N. Rajagopalan; Rama Chellappa
Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-based approach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features, such as the downsampled and smoothed width vectors and the velocity profile, are extracted from the width vector, and sequences of such temporally ordered feature vectors are used to represent a person's gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally occurring changes in walking speed. The performance of the proposed method is tested using different gait databases.
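A compact sketch of the dynamic time-warping comparison between two width-vector sequences; this is the textbook DTW recursion over Euclidean frame distances, and the sequence lengths and dimensions below are placeholders:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping between two feature-vector sequences.

    seq_a : (m, d) array, e.g. smoothed width vectors of a probe sequence
    seq_b : (n, d) array, a gallery sequence of the same feature type
    returns the accumulated cost after non-linear time normalization.
    """
    m, n = len(seq_a), len(seq_b)
    cost = np.full((m + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[m, n]

probe = np.random.rand(90, 32)     # probe sequence at one walking speed
gallery = np.random.rand(110, 32)  # gallery sequence at another speed
print(dtw_distance(probe, gallery))
```

The warping path absorbs differences in walking speed, so the final cost reflects shape differences between the two gait signatures rather than timing differences.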
IEEE International Conference on Automatic Face and Gesture Recognition | 2002
Amit A. Kale; A. N. Rajagopalan; N. Cuntoor; Volker Krüger
Gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. In this paper, we propose a view-based approach to recognize humans through gait. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of stances or key frames that occur during the walk cycle of an individual is chosen. Euclidean distances of a given image from this stance set are computed and a lower-dimensional observation vector is generated. A continuous hidden Markov model (HMM) is trained using several such lower-dimensional vector sequences extracted from the video. This methodology serves to compactly capture structural and transitional features that are unique to an individual. The statistical nature of the HMM renders overall robustness to gait representation and recognition. The human identification performance of the proposed scheme is found to be quite good when tested in natural walking conditions.
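As an illustration of the training step, the sketch below fits a continuous Gaussian-emission HMM to stacked observation sequences using the third-party hmmlearn package; the package, the state count, and the data shapes are assumptions made for this example, not what the paper used:

```python
import numpy as np
from hmmlearn import hmm   # third-party package, used here only for illustration

# Observation sequences: one low-dimensional distance vector per frame,
# stacked for several walk cycles of the same person.
rng = np.random.default_rng(0)
obs = rng.random((300, 5))          # 300 frames, 5 distances to key stances
lengths = [100, 100, 100]           # three walk cycles

# One continuous HMM per person; identification picks the model that gives
# the highest log-likelihood for a probe sequence.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=25)
model.fit(obs, lengths)
print(model.score(obs[:100]))       # log-likelihood of one cycle under this model
```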
Pattern Recognition Letters | 2007
A. Piyush Shanker; A. N. Rajagopalan
In this paper, we propose a signature verification system based on Dynamic Time Warping (DTW). The method works by extracting the vertical projection feature from signature images and by comparing reference and probe feature templates using elastic matching. Modifications are made to the basic DTW algorithm to account for the stability of the various components of a signature. The basic DTW and the modified DTW methods are tested on a signature database of 100 people. The modified DTW algorithm, which incorporates stability, has an equal-error-rate of only 2% in comparison to 29% for the basic DTW method.
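A small sketch of the vertical projection feature, i.e. the column-wise ink count of the binarized signature image; the normalization shown is an illustrative choice, and the resulting profiles would then be compared with elastic (DTW) matching:

```python
import numpy as np

def vertical_projection(binary_signature):
    """Column-wise ink count of a binarized signature image.

    binary_signature : (H, W) array with 1 for ink pixels, 0 for background
    returns a length-W profile that can be compared with DTW.
    """
    profile = binary_signature.sum(axis=0).astype(float)
    # Normalize so the profile is insensitive to overall signature size.
    return profile / (profile.max() + 1e-8)

img = (np.random.rand(60, 200) > 0.9).astype(np.uint8)   # stand-in for a thresholded scan
reference = vertical_projection(img)
print(reference.shape)   # (200,)
```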
IEEE Transactions on Multimedia | 2007
Ayan Chakrabarti; A. N. Rajagopalan; Rama Chellappa
We present a learning-based method to super-resolve face images using a kernel principal component analysis-based prior model. A prior probability is formulated based on the energy lying outside the span of principal components identified in a higher-dimensional feature space. This is used to regularize the reconstruction of the high-resolution image. We demonstrate with experiments that including higher-order correlations results in significant improvements.
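A rough numpy sketch of the quantity such a KPCA prior penalizes: the energy of the (centred) feature-space representation of an image that lies outside the span of the leading kernel principal components. The RBF kernel, its width, the component count, and the data shapes are placeholder assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1e-3):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_residual_energy(x, train, n_components=20, gamma=1e-3):
    """Energy of phi(x) outside the span of the leading kernel principal
    components learned from `train` (rows are vectorized training faces)."""
    n = len(train)
    K = rbf_kernel(train, train, gamma)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one            # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]           # leading components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))

    k_x = rbf_kernel(x[None, :], train, gamma).ravel()
    k_x_c = k_x - K.mean(axis=0) - k_x.mean() + K.mean()  # centred kernel row
    proj = alphas.T @ k_x_c                                # projections onto components
    total = 1.0 - 2.0 * k_x.mean() + K.mean()              # ||phi_c(x)||^2 for an RBF kernel
    return float(total - (proj ** 2).sum())                # residual energy used as a prior

train = np.random.rand(50, 256)     # stand-in for vectorized face patches
print(kpca_residual_energy(np.random.rand(256), train))
```

Candidate high-resolution reconstructions with small residual energy lie close to the learned face manifold, which is what makes the term usable as a regularizer.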
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004
A. N. Rajagopalan; Subhasis Chaudhuri; Uma Mudenagudi
We propose a method for estimating depth from images captured with a real aperture camera by fusing defocus and stereo cues. The idea is to use stereo-based constraints in conjunction with defocusing to obtain improved estimates of depth over those of stereo or defocus alone. The depth map as well as the original image of the scene are modeled as Markov random fields with a smoothness prior, and their estimates are obtained by minimizing a suitable energy function using simulated annealing. Although the proposed method is computationally less efficient than standard stereo or DFD methods, its main advantage is the simultaneous recovery of depth and a space-variant restoration of the original focused image of the scene.
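Schematically, the estimate minimizes an energy that combines per-pixel defocus and stereo data costs with a smoothness prior on depth. The sketch below only illustrates the form of such an energy; the cost functions and weights are placeholders, not the paper's actual imaging-model terms:

```python
import numpy as np

def mrf_energy(depth, defocus_cost, stereo_cost, lam=0.5, mu=1.0):
    """Schematic MRF energy for a candidate depth map.

    depth        : (H, W) candidate depth map
    defocus_cost : function mapping depth -> (H, W) per-pixel defocus data cost
    stereo_cost  : function mapping depth -> (H, W) per-pixel stereo matching cost
    lam          : weight trading off the two data cues
    mu           : weight of the smoothness (MRF) prior
    """
    data = defocus_cost(depth) + lam * stereo_cost(depth)
    # First-order smoothness prior over neighbouring depth values.
    smooth = (np.abs(np.diff(depth, axis=0)).sum()
              + np.abs(np.diff(depth, axis=1)).sum())
    return data.sum() + mu * smooth

rng = np.random.default_rng(0)
z = rng.random((32, 32))
print(mrf_energy(z,
                 defocus_cost=lambda d: (d - 0.5) ** 2,   # placeholder data costs
                 stereo_cost=lambda d: np.abs(d - 0.4)))
```

An energy of this form is then minimized with simulated annealing, as in the MAP-MRF depth-from-defocus work below.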
International Conference on Computer Vision | 1998
A. N. Rajagopalan; K.S. Kumar; J. Karlekar; R. Manivasakan; M.M. Patil; Uday B. Desai; P.G. Poonacha; Subhasis Chaudhuri
Two new schemes are presented for finding human faces in a photograph. The first scheme approximates the unknown distributions of the face and the face-like manifolds using higher-order statistics (HOS). An HOS-based data clustering algorithm is also proposed. In the second scheme, the face to non-face and non-face to face transitions are learnt using a hidden Markov model (HMM). The HMM parameters are estimated corresponding to a given photograph and the faces are located by examining the optimal state sequence of the HMM. Experimental results are presented on the performance of both schemes.
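The HOS-based scheme assigns image windows to face or non-face clusters by comparing statistical descriptors. The sketch below uses simple normalized third- and fourth-order central moments as a stand-in for the higher-order statistics actually used in the paper:

```python
import numpy as np

def hos_features(window):
    """Illustrative higher-order-statistics descriptor of an image window:
    mean, variance, and normalized third and fourth central moments."""
    x = window.astype(float).ravel()
    mu, sigma = x.mean(), x.std() + 1e-8
    z = (x - mu) / sigma
    return np.array([mu, sigma ** 2, (z ** 3).mean(), (z ** 4).mean()])

def nearest_cluster(feature, centers):
    """Assign a window to the closest face / non-face cluster centre."""
    dists = np.linalg.norm(centers - feature, axis=1)
    return int(np.argmin(dists)), float(dists.min())

window = np.random.rand(19, 19)                  # stand-in for a scanned sub-window
centers = np.random.rand(6, 4)                   # learned cluster centres (face + non-face)
print(nearest_cluster(hos_features(window), centers))
```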
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999
A. N. Rajagopalan; Subhasis Chaudhuri
In this paper, we propose a MAP-Markov random field (MRF) based scheme for recovering the depth and the focused image of a scene from two defocused images. The space-variant blur parameter and the focused image of the scene are both modeled as MRFs and their MAP estimates are obtained using simulated annealing. The scheme is amenable to the incorporation of smoothness constraints on the spatial variations of the blur parameter as well as the scene intensity. It also allows for inclusion of line fields to preserve discontinuities. The performance of the proposed scheme is tested on synthetic as well as real data and the estimates of the depth are found to be better than those of the existing window-based depth from defocus technique. The quality of the space-variant restored image of the scene is quite good even under severe space-varying blurring conditions.
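A generic simulated-annealing loop of the kind used to obtain such MAP estimates, shown here for an arbitrary scalar field (for instance the space-variant blur map) and energy function; the cooling schedule and perturbation are illustrative defaults, not the paper's settings:

```python
import numpy as np

def simulated_annealing(energy, field, n_iter=20000, t0=1.0, cooling=0.999,
                        step=0.1, rng=np.random.default_rng(0)):
    """Generic simulated-annealing minimizer for an MRF-style energy.

    energy : callable mapping a field (e.g. a space-variant blur map) to a scalar
    field  : (H, W) initial estimate
    """
    current, e_cur, t = field.copy(), energy(field), t0
    for _ in range(n_iter):
        i, j = rng.integers(current.shape[0]), rng.integers(current.shape[1])
        candidate = current.copy()
        candidate[i, j] += step * rng.standard_normal()    # perturb one site
        e_new = energy(candidate)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new < e_cur or rng.random() < np.exp((e_cur - e_new) / t):
            current, e_cur = candidate, e_new
        t *= cooling                                       # cool the temperature
    return current

obs = np.random.default_rng(1).random((16, 16))            # noisy per-pixel observation

def toy_energy(s):
    # Quadratic data term plus a first-order smoothness prior.
    return ((s - obs) ** 2).sum() + 0.1 * np.abs(np.diff(s, axis=0)).sum()

estimate = simulated_annealing(toy_energy, field=np.zeros_like(obs), n_iter=2000)
```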
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997
A. N. Rajagopalan; Subhasis Chaudhuri
In this paper, we propose a regularized solution to the depth from defocus (DFD) problem using the space-frequency representation (SFR) framework. A smoothness constraint is imposed on the estimates of the blur parameter, and a variational approach to the DFD problem is developed. Among the numerous SFRs, we study the applicability of the complex spectrogram and the Wigner distribution, in particular, for depth recovery. The performance of the proposed variational method is tested on both synthetic and real images. The method yields good results, and the quality of the estimates is significantly better than that obtained without the smoothness constraint on the blur parameter.
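As a toy illustration of how a smoothness constraint regularizes the blur-parameter estimate, the sketch below runs gradient descent on a quadratic data term plus a quadratic smoothness penalty; this is a generic regularized estimator under assumed weights, not the paper's exact variational formulation:

```python
import numpy as np

def smooth_blur_estimate(sigma_obs, lam=1.0, n_iter=500, step=0.05):
    """Toy regularized blur-parameter estimation.

    Minimizes  sum (sigma - sigma_obs)^2 + lam * sum |grad sigma|^2
    by gradient descent; sigma_obs is a noisy per-pixel blur observation.
    """
    sigma = sigma_obs.copy()
    for _ in range(n_iter):
        # Gradient of the data term.
        grad = 2.0 * (sigma - sigma_obs)
        # Gradient of the smoothness term: -2*lam times the discrete Laplacian.
        padded = np.pad(sigma, 1, mode='edge')
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * sigma)
        grad -= 2.0 * lam * lap
        sigma -= step * grad
    return sigma

rng = np.random.default_rng(0)
true = np.tile(np.linspace(1.0, 3.0, 64), (64, 1))          # slanted-plane blur map
noisy = true + 0.3 * rng.standard_normal(true.shape)
print(np.abs(smooth_blur_estimate(noisy) - true).mean(),    # regularized error
      np.abs(noisy - true).mean())                          # raw observation error
```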
Computer Vision and Image Understanding | 1997
A. N. Rajagopalan; Subhasis Chaudhuri
The recovery of depth from defocused images involves calculating the depth of various points in a scene by modeling the effect that the focal parameters of the camera have on images acquired with a small depth of field. In the depth from defocus (DFD) approach, previous methods assume the depth to be constant over fairly large local regions and estimate it through inverse filtering by considering the system to be shift-invariant over those local regions. However, a subimage analyzed in isolation introduces errors into the depth estimate. In this paper, we propose two new approaches for estimating depth from defocused images. The first approach models the DFD system as a block shift-variant one and incorporates the interaction of blur among neighboring subimages to improve the depth estimate. The second approach looks at the depth from defocus problem in the space-frequency representation framework. In particular, the complex spectrogram and the Wigner distribution are shown to be likely candidates for recovering depth from defocused images. The performance of the proposed methods is tested on both synthetic and real images. The proposed methods yield good results, and the quality of their estimates is compared with that of the existing method.
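A minimal sketch of the local windowed Fourier analysis that a complex-spectrogram treatment of DFD builds on: the image is broken into overlapping windows and each window is transformed, so that corresponding local spectra of two defocused images can be compared to estimate relative blur. The window size, step, and taper below are illustrative choices:

```python
import numpy as np

def complex_spectrogram(image, win=16, step=8):
    """Local windowed Fourier transform of an image (a simple 2-D complex
    spectrogram). Corresponding local spectra of two defocused images can be
    compared window by window to infer the relative blur, and hence depth."""
    h, w = image.shape
    taper = np.hanning(win)[:, None] * np.hanning(win)[None, :]
    spectra = {}
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            patch = image[i:i + win, j:j + win] * taper
            spectra[(i, j)] = np.fft.fft2(patch)
    return spectra

img = np.random.rand(64, 64)            # stand-in for one defocused observation
spectra = complex_spectrogram(img)
print(len(spectra))                      # number of local windows analyzed
```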