Publication


Featured research published by Sudeep Sarkar.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Comparison and combination of ear and face images in appearance-based biometrics

Kyong I. Chang; Kevin W. Bowyer; Sudeep Sarkar; Barnabas Victor

Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment.
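
The PCA-based matching referenced above is the standard eigenface-style appearance pipeline. The sketch below is a minimal illustration of that general approach, not the authors' implementation: the image sizes, number of retained components, and nearest-neighbor rule are illustrative assumptions, and a multimodal ear-plus-face variant would fuse (e.g., concatenate) the two coefficient vectors before matching.

```python
import numpy as np

def pca_subspace(train, k):
    """Fit a k-dimensional PCA subspace to row-vectorized training images."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Principal axes come from the SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(images, mean, axes):
    """Project row-vectorized images onto the retained principal axes."""
    return (images - mean) @ axes.T

def nearest_neighbor(probe, gallery):
    """Index of the gallery coefficient vector closest to the probe."""
    return int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))

# Hypothetical data: 100 gallery images and one probe, 64x64 pixels, flattened.
rng = np.random.default_rng(0)
gallery_imgs = rng.random((100, 64 * 64))
probe_img = gallery_imgs[42] + 0.01 * rng.standard_normal(64 * 64)

mean, axes = pca_subspace(gallery_imgs, k=20)
gallery_coeffs = project(gallery_imgs, mean, axes)
probe_coeffs = project(probe_img[None, :], mean, axes)[0]
print("best match:", nearest_neighbor(probe_coeffs, gallery_coeffs))  # expect 42
```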


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

A robust visual method for assessing the relative performance of edge-detection algorithms

Michael D. Heath; Sudeep Sarkar; Thomas Sanocki; Kevin W. Bowyer

A new method for evaluating edge detection algorithms is presented and applied to measure the relative performance of algorithms by Canny, Nalwa-Binford, Iverson-Zucker, Bergholm, and Rothwell. The basic measure of performance is a visual rating score which indicates the perceived quality of the edges for identifying an object. The process of evaluating edge detection algorithms with this performance measure requires the collection of a set of gray-scale images, optimizing the input parameters for each algorithm, conducting visual evaluation experiments and applying statistical analysis methods. The novel aspect of this work is the use of a visual task and real images of complex scenes in evaluating edge detectors. The method is appealing because, by definition, the results agree with visual evaluations of the edge images.


Computer Vision and Image Understanding | 1998

Comparison of Edge Detectors

Michael D. Heath; Sudeep Sarkar; Thomas Sanocki; Kevin W. Bowyer

Because of the difficulty of obtaining ground truth for real images, the traditional technique for comparing low-level vision algorithms is to present image results, side by side, and to let the reader subjectively judge the quality. This is not a scientifically satisfactory strategy. However, human rating experiments can be done in a more rigorous manner to provide useful quantitative conclusions. We present a paradigm based on experimental psychology and statistics, in which humans rate the output of low-level vision algorithms. We demonstrate the proposed experimental strategy by comparing four well-known edge detectors: Canny, Nalwa-Binford, Sarkar-Boyer, and Sobel. We answer the following questions: Is there a statistically significant difference in edge detector outputs as perceived by humans when considering an object recognition task? Do the edge detection results of an operator vary significantly with the choice of its parameters? For each detector, is it possible to choose a single set of optimal parameters for all the images without significantly affecting the edge output quality? Does an edge detector produce edges of the same quality for all images, or does the edge quality vary with the image?
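
For context, two of the four detectors compared here, Canny and Sobel, are readily available in common libraries. The snippet below is a minimal sketch of producing their edge maps with OpenCV ahead of any rating step; the image path, thresholds, and kernel size are illustrative assumptions, and the paper's per-detector parameter optimization and human-rating protocol are not reproduced.

```python
import cv2
import numpy as np

# Load a grayscale test image; the path is a placeholder.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("provide a grayscale test image at scene.png")

# Canny with illustrative hysteresis thresholds.
canny_edges = cv2.Canny(img, 50, 150)

# Sobel gradient magnitude, thresholded to a binary edge map.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)
sobel_edges = (magnitude > 0.25 * magnitude.max()).astype(np.uint8) * 255

cv2.imwrite("canny_edges.png", canny_edges)
cv2.imwrite("sobel_edges.png", sobel_edges)
```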


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Improved gait recognition by gait dynamics normalization

Zongyi Liu; Sudeep Sarkar

Potential sources for gait biometrics can be seen to derive from two aspects: gait shape and gait dynamics. We show that improved gait recognition can be achieved after normalization of dynamics and focusing on the shape information. We normalize for gait dynamics using a generic walking model, as captured by a population hidden Markov model (pHMM) defined for a set of individuals. The states of this pHMM represent gait stances over one gait cycle and the observations are the silhouettes of the corresponding gait stances. For each sequence, we first use Viterbi decoding of the gait dynamics to arrive at one dynamics-normalized, averaged, gait cycle of fixed length. The distance between two sequences is the distance between the two corresponding dynamics-normalized gait cycles, which we quantify by the sum of the distances between the corresponding gait stances. Distances between two silhouettes from the same generic gait stance are computed in the linear discriminant analysis space so as to maximize the discrimination between persons, while minimizing the variations of the same subject under different conditions. The distance computation is constructed so that it is invariant to dilations and erosions of the silhouettes. This helps us handle variations in silhouette shape that can occur with changing imaging conditions. We present results on three different, publicly available, data sets. First, we consider the HumanID gait challenge data set, which is the largest gait benchmarking data set that is available (122 subjects), exercising five different factors, i.e., viewpoint, shoe, surface, carrying condition, and time. We significantly improve the performance across the hard experiments involving surface change and briefcase carrying conditions. Second, we also show improved performance on the UMD gait data set that exercises time variations for 55 subjects. Third, on the CMU Mobo data set, we show results for matching across different walking speeds. It is worth noting that there was no separate training for the UMD and CMU data sets.
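
As a rough illustration of the final matching step described above, the sketch below computes the distance between two dynamics-normalized gait cycles as the sum of stance-wise distances in a discriminant space. It assumes the pHMM/Viterbi normalization has already produced fixed-length stance sequences and that a projection matrix is given; the array shapes and the random projection are placeholder assumptions, and the dilation/erosion-invariant distance from the paper is not reproduced.

```python
import numpy as np

def gait_cycle_distance(cycle_a, cycle_b, lda_w):
    """
    Distance between two dynamics-normalized gait cycles.

    cycle_a, cycle_b : arrays of shape (n_stances, n_pixels), one averaged
                       silhouette per generic gait stance, already aligned
                       to the same pHMM stances by Viterbi decoding.
    lda_w            : (n_pixels, n_dims) projection into a discriminant
                       space, assumed to have been learned from training data.
    """
    proj_a = cycle_a @ lda_w
    proj_b = cycle_b @ lda_w
    # Sum of stance-wise distances, as described in the abstract.
    return float(np.sum(np.linalg.norm(proj_a - proj_b, axis=1)))

# Hypothetical sizes: 20 stances, 32x22 silhouettes, 50-D discriminant space.
rng = np.random.default_rng(1)
n_stances, n_pixels, n_dims = 20, 32 * 22, 50
lda_w = rng.standard_normal((n_pixels, n_dims))
probe = rng.random((n_stances, n_pixels))
gallery = rng.random((n_stances, n_pixels))
print(gait_cycle_distance(probe, gallery, lda_w))
```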


Computer Vision and Pattern Recognition | 1996

Quantitative measures of change based on feature organization: eigenvalues and eigenvectors

Sudeep Sarkar; Kim L. Boyer

We propose four measures of image organizational change which can be used to monitor construction activity. The measures are based on the thesis that the progress of construction will see a change in the individual image feature attributes as well as an evolution in the relationships among these features. This change in the relationship is captured by the eigenvalues and eigenvectors of the relation graph embodying the organization among the image features. We demonstrate the ability of the measures to differentiate between no development, the onset of construction, and full development, on the available real test image set.
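
The abstract does not spell out the four measures, so the sketch below only illustrates the underlying machinery: form a symmetric relation (affinity) graph over image features at two time points and compare its leading eigenvalues. The specific change score and the synthetic affinity matrices are assumptions made for illustration.

```python
import numpy as np

def leading_spectrum(affinity, k=5):
    """Top-k eigenvalues of a symmetric relation (affinity) matrix."""
    vals = np.linalg.eigvalsh(affinity)   # ascending order
    return vals[::-1][:k]

def spectral_change(affinity_t0, affinity_t1, k=5):
    """A simple organizational-change score between two time points:
    the norm of the difference of the leading eigenvalues."""
    s0 = leading_spectrum(affinity_t0, k)
    s1 = leading_spectrum(affinity_t1, k)
    return float(np.linalg.norm(s0 - s1))

# Hypothetical relation graphs over 30 image features at two dates.
rng = np.random.default_rng(2)
a = rng.random((30, 30)); a = (a + a.T) / 2
b = a.copy(); b[:10, :10] += 0.5; b = (b + b.T) / 2   # added structure
print(spectral_change(a, b))
```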


International Conference on Pattern Recognition | 2004

Simplest representation yet for gait recognition: averaged silhouette

Zongyi Liu; Sudeep Sarkar

We present a robust representation for gait recognition that is compact, easy to construct, and affords efficient matching. Instead of a time series based representation comprising a sequence of raw silhouette frames or of features extracted from them, as has been the practice, we simply align and average the silhouettes over one gait cycle. We then base recognition on the Euclidean distance between these averaged silhouette representations. We show, using the recently formulated gait challenge problem (www.gaitchallenge.org), that execution time improves by a factor of 30 while recognition power remains comparable to the gait baseline algorithm, which is becoming the comparison standard in gait recognition. Experiments with portions of the average silhouette representation show that recognition power is not derived entirely from upper body shape; rather, the dynamics of the legs contribute equally to recognition. This study does, however, raise intriguing doubts about the need for accurate shape and dynamics representations for gait recognition.
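
The representation itself is simple enough to sketch directly. The following is a minimal, assumption-laden version: silhouettes are taken to be already scaled and centered to a common frame size, one gait cycle is assumed to be segmented, and the subject identifiers, frame counts, and image dimensions are illustrative.

```python
import numpy as np

def averaged_silhouette(silhouettes):
    """
    silhouettes : (n_frames, h, w) binary silhouettes spanning one gait cycle,
                  assumed already scaled and centered to a common frame size.
    Returns the per-pixel average over the cycle, flattened to a vector.
    """
    return silhouettes.mean(axis=0).ravel()

def identify(probe_cycle, gallery_cycles):
    """Rank gallery subjects by Euclidean distance of averaged silhouettes."""
    probe_vec = averaged_silhouette(probe_cycle)
    dists = {
        subject: np.linalg.norm(probe_vec - averaged_silhouette(cycle))
        for subject, cycle in gallery_cycles.items()
    }
    return sorted(dists, key=dists.get)

# Hypothetical data: 3 gallery subjects, 30-frame cycles of 128x88 silhouettes.
rng = np.random.default_rng(3)
gallery = {s: (rng.random((30, 128, 88)) > 0.5).astype(float) for s in "ABC"}
probe = gallery["B"].copy()
probe[:, ::7, :] = 0          # perturb the probe slightly
print(identify(probe, gallery))   # 'B' should rank first
```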


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Supervised learning of large perceptual organization: graph spectral partitioning and learning automata

Sudeep Sarkar; Padmanabhan Soundararajan

Perceptual organization offers an elegant framework to group low-level features that are likely to come from a single object. We offer a novel strategy to adapt this grouping process to objects in a domain. Given a set of training images of objects in context, the associated learning process decides on the relative importance of the basic salient relationships, such as proximity, parallelness, continuity, junctions, and common region, toward segregating the objects from the background. The parameters of the grouping process are cast as probabilistic specifications of Bayesian networks that need to be learned. This learning is accomplished using a team of stochastic automata in an N-player cooperative game framework. The grouping process, which is based on graph partitioning, is able to form large groups from relationships defined over a small set of primitives, and it is fast. We statistically demonstrate the robust performance of the grouping and the learning frameworks on a variety of real images. Among the interesting conclusions are the significant role of photometric attributes in grouping and the ability to form large salient groups from a set of local relations, each defined over a small number of primitives.
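
A rough sketch of the graph-partitioning side of this pipeline is given below: pairwise saliency relations between low-level primitives form an affinity matrix, which is bipartitioned using the eigenvector associated with the second-smallest eigenvalue of the normalized Laplacian. The learning-automata game that tunes the relative weights of the relations is not shown, and the affinity values here are synthetic assumptions.

```python
import numpy as np

def spectral_bipartition(affinity):
    """
    Split primitives into two groups using the eigenvector associated with
    the second-smallest eigenvalue of the normalized graph Laplacian.
    affinity : (n, n) symmetric matrix of pairwise grouping saliency,
               e.g. a weighted combination of proximity, parallelness,
               continuity, junction, and common-region cues.
    """
    degree = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(degree + 1e-12))
    laplacian = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]                 # second-smallest eigenvalue
    return fiedler >= 0                     # boolean group labels

# Hypothetical affinity: two clusters of 10 primitives with weak cross-links.
rng = np.random.default_rng(4)
a = 0.05 * rng.random((20, 20))
a[:10, :10] += 0.9
a[10:, 10:] += 0.9
a = (a + a.T) / 2
np.fill_diagonal(a, 0.0)
print(spectral_bipartition(a).astype(int))
```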


IEEE Transactions on Systems, Man, and Cybernetics | 1993

Perceptual organization in computer vision: a review and a proposal for a classificatory structure

Sudeep Sarkar; Kim L. Boyer

The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. A brief history of perceptual organization research in both humans and computer vision is offered. A classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. The perceptual organization work in computer vision in the context of this classificatory structure is reviewed. The array of computational techniques applied to perceptual organization problems in computer vision is surveyed.


International Conference on Pattern Recognition | 2002

The gait identification challenge problem: data sets and baseline algorithm

P. J. Phillips; Sudeep Sarkar; I. Robledo; Patrick J. Grother; Kevin W. Bowyer

Recognition of people through gait analysis is an important research topic, with potential applications in video surveillance, tracking, and monitoring. Recognizing the importance of evaluating and comparing possible competing solutions to this problem, we previously introduced the HumanID challenge problem consisting of a set of experiments of increasing difficulty, a baseline algorithm, and a large set of video sequences (about 300 GB of data related to 452 sequences from 74 subjects) acquired to investigate important dimensions of this problem, such as variations due to viewpoint, footwear, and walking surface. In this paper we present a detailed investigation of the baseline algorithm, quantify the effect of the various covariates on gait-based identification, and update the previously reported baseline performance with optimized results. We establish that the performance of the baseline algorithm is robust with respect to its various parameters. The overall identification performance is also stable with respect to the quality of the silhouettes. We find that approximately the lower 20% of the silhouette accounts for most of the recognition achieved. Viewpoint has a barely statistically significant effect on identification rates, whereas footwear and surface type do have significant effects, with the effect due to surface type being approximately five times that of shoe type.


Computer Vision and Pattern Recognition | 1996

Comparison of edge detectors: a methodology and initial study

Michael D. Heath; Sudeep Sarkar; Thomas Sanocki; Kevin W. Bowyer

The purpose of this paper is to describe a new (to computer vision) experimental framework which allows us to make quantitative comparisons using subjective ratings made by people. This approach avoids the issue of pixel-level ground truth. As a result, it does not allow us to make statements about the frequency of false positive and false negative errors at the pixel level. Instead, using experimental design and statistical techniques borrowed from psychology, we make statements about whether the outputs of one edge detector are rated statistically significantly higher than the outputs of another. This approach offers itself as a nice complement to signal-based quantitative measures. Also, the evaluation paradigm in this paper is goal oriented; in particular, we consider edge detection in the context of object recognition. The human judges rate the edge detectors based on how well they capture the salient features of real objects. So far, edge detection modules have been designed and evaluated in isolation, except for the recent work by Ramesh and Haralick (1992). The only prior work (that we are aware of) which also uses humans to rate image algorithms is that of Reeves and Higdon (1995). They use human ratings to decide on regularization parameters of image restoration. Fram and Deutsch (1975) also used human subjects; however, the focus was on human versus machine performance rather than using human ratings to compare different edge detectors. The use of human judges to rate image outputs must be approached systematically. Experiments must be designed and conducted carefully, and results interpreted with appropriate statistical tools. The use of statistical analysis in vision system performance characterization has been rare. The only prior work in the area that we are aware of is that of Nair et al. (1995), who used statistical ranking procedures to compare neural network based object recognition systems.

Collaboration


Dive into Sudeep Sarkar's collaborations.

Top Co-Authors

Dmitry B. Goldgof, University of South Florida

Kim L. Boyer, Rensselaer Polytechnic Institute

Sanjukta Bhanja, University of South Florida

Leonid V. Tsap, Lawrence Livermore National Laboratory

Rangachar Kasturi, University of South Florida

Yong Zhang, Youngstown State University

Barbara L. Loeding, Florida Polytechnic University

Matthew Shreve, University of South Florida

Zongyi Liu, University of South Florida