Publication


Featured research published by Zongyi Liu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Improved gait recognition by gait dynamics normalization

Zongyi Liu; Sudeep Sarkar

Potential sources for gait biometrics can be seen to derive from two aspects: gait shape and gait dynamics. We show that improved gait recognition can be achieved by normalizing the dynamics and focusing on the shape information. We normalize for gait dynamics using a generic walking model, as captured by a population hidden Markov model (pHMM) defined for a set of individuals. The states of this pHMM represent gait stances over one gait cycle and the observations are the silhouettes of the corresponding gait stances. For each sequence, we first use Viterbi decoding of the gait dynamics to arrive at one dynamics-normalized, averaged gait cycle of fixed length. The distance between two sequences is the distance between the two corresponding dynamics-normalized gait cycles, which we quantify by the sum of the distances between the corresponding gait stances. Distances between two silhouettes from the same generic gait stance are computed in the linear discriminant analysis space so as to maximize the discrimination between persons, while minimizing the variations of the same subject under different conditions. The distance computation is constructed so that it is invariant to dilations and erosions of the silhouettes. This helps us handle variations in silhouette shape that can occur with changing imaging conditions. We present results on three different, publicly available data sets. First, we consider the HumanID gait challenge data set, which is the largest gait benchmarking data set available (122 subjects), exercising five different factors, i.e., viewpoint, shoe, surface, carrying condition, and time. We significantly improve the performance across the hard experiments involving surface change and briefcase carrying conditions. Second, we show improved performance on the UMD gait data set, which exercises time variations for 55 subjects. Third, on the CMU Mobo data set, we show results for matching across different walking speeds. It is worth noting that there was no separate training for the UMD and CMU data sets.
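To make the dynamics-normalization idea concrete, a minimal Python sketch of the averaging and stance-wise distance steps is given below. It assumes the per-frame stance labels have already been obtained from Viterbi decoding of the pHMM, and it substitutes a plain Euclidean distance for the paper's LDA-space, dilation/erosion-invariant distance; all names are illustrative, not the authors' code.

```python
import numpy as np

def dynamics_normalized_cycle(silhouettes, stance_labels, n_stances):
    """Average the silhouette frames assigned to each stance, producing one
    fixed-length gait cycle (n_stances frames) regardless of walking speed."""
    cycle = []
    for s in range(n_stances):
        frames = silhouettes[stance_labels == s]          # frames for stance s
        cycle.append(frames.mean(axis=0) if len(frames)
                     else np.zeros(silhouettes.shape[1:]))
    return np.stack(cycle)                                # (n_stances, H, W)

def cycle_distance(cycle_a, cycle_b):
    """Distance between two dynamics-normalized cycles: the sum of distances
    between corresponding stances (Euclidean here; the paper uses an
    LDA-space distance that is invariant to dilations and erosions)."""
    return sum(np.linalg.norm((a - b).ravel()) for a, b in zip(cycle_a, cycle_b))
```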


International Conference on Pattern Recognition | 2004

Simplest representation yet for gait recognition: averaged silhouette

Zongyi Liu; Sudeep Sarkar

We present a robust representation for gait recognition that is compact, easy to construct, and affords efficient matching. Instead of a time-series representation comprising a sequence of raw silhouette frames or of features extracted from them, as has been the practice, we simply align and average the silhouettes over one gait cycle. We then base recognition on the Euclidean distance between these averaged silhouette representations. We show, using the recently formulated gait challenge problem (www.gaitchallenge.org), that execution time improves by a factor of 30 while recognition power remains comparable to the gait baseline algorithm, which is becoming the comparison standard in gait recognition. Experiments with portions of the averaged silhouette representation show that recognition power is not derived entirely from upper-body shape; rather, the dynamics of the legs contribute equally to recognition. However, this study does raise intriguing doubts about the need for accurate shape and dynamics representations for gait recognition.
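The averaged-silhouette representation described above is simple enough to sketch directly. The snippet below assumes the silhouettes for one gait cycle are already extracted and aligned to a common frame (the paper handles this via centering and scaling); function names are illustrative.

```python
import numpy as np

def averaged_silhouette(silhouettes):
    """Average aligned binary silhouettes (T, H, W) over one gait cycle."""
    return silhouettes.astype(float).mean(axis=0)

def match_distance(gallery_silhouettes, probe_silhouettes):
    """Recognition is based on the Euclidean distance between the two
    averaged-silhouette images (smaller means more similar)."""
    g = averaged_silhouette(gallery_silhouettes)
    p = averaged_silhouette(probe_silhouettes)
    return np.linalg.norm(g - p)
```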


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2005

Effect of silhouette quality on hard problems in gait recognition

Zongyi Liu; Sudeep Sarkar

Gait as a behavioral biometric has been the subject of recent investigations. However, understanding the limits of gait-based recognition, and the quantitative study of the factors affecting gait, have been confounded by errors in the extracted silhouettes on which most recognition algorithms are based. To enable us to study this effect on a large population of subjects, we present a novel model-based silhouette reconstruction strategy, built on a population hidden Markov model (HMM) coupled with an eigen-stance model, to correct for common errors in silhouette detection arising from shadows and background subtraction. The model is trained and benchmarked using manually specified silhouettes for 71 subjects from the recently formulated HumanID Gait Challenge database. Unlike essentially pixel-level silhouette cleaning methods, this method can remove shadows, especially between the feet in the legs-apart stance, and can remove parts due to carried objects, such as a briefcase or a walking cane. After quantitatively establishing the improved quality of the silhouettes over simple background subtraction, we show on the 122-subject HumanID Gait Challenge dataset, using two gait recognition algorithms, that the observed poor performance of gait recognition on hard problems involving matching across factors such as surface, time, and shoe is not due to poor silhouette quality, beyond what is available from statistical background-subtraction-based methods.
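As a rough illustration of the eigen-stance reconstruction step, the sketch below projects a noisy silhouette onto a PCA basis learned for its gait stance and reconstructs it, which suppresses pixels (shadows, carried objects) that do not fit the learned stance shape. The stance assignment via the population HMM and the training of the basis from the manual silhouettes are omitted; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def eigen_stance_clean(noisy_silhouette, mean_stance, eigen_basis, thresh=0.5):
    """Reconstruct a silhouette from its stance-specific eigen-space.

    mean_stance: (H*W,) mean silhouette for this stance
    eigen_basis: (H*W, k) orthonormal PCA basis for this stance
    """
    x = noisy_silhouette.ravel().astype(float) - mean_stance
    coeffs = eigen_basis.T @ x                      # project onto the basis
    recon = mean_stance + eigen_basis @ coeffs      # reconstruct
    return (recon.reshape(noisy_silhouette.shape) > thresh).astype(np.uint8)
```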


Image and Vision Computing | 2007

Outdoor recognition at a distance by fusing gait and face

Zongyi Liu; Sudeep Sarkar

We explore the possibility of using both face and gait to enhance human recognition-at-a-distance performance in outdoor conditions. Although the individual performance of gait- and face-based biometrics at a distance is poor under outdoor illumination conditions, walking-surface changes, and time variations, we show that recognition performance is significantly enhanced by combining face and gait. For gait, we present a new recognition scheme that relies on computing distances based on selected, discriminatory gait stances. Given a gait sequence covering multiple gait cycles, it identifies the salient stances using a population hidden Markov model (HMM). An averaged representation of the detected silhouettes for these stances is then built using eigen-stance shape models. Similarity between two gait sequences is based on the similarities of these averaged representations of the salient stances. This gait recognition strategy, which essentially emphasizes shape over dynamics, significantly outperforms the HumanID Gait Challenge baseline algorithm. For face, which is a mature biometric with many recognition algorithms, we chose the elastic bunch graph matching method, which was found to be the best in the FERET 2000 studies. On a gallery database of 70 individuals and two probe sets, one with 39 individuals taken on the same day and the other with 21 individuals taken at least 3 months apart, the results indicate that although the verification rates of the individual biometrics at a 1% false alarm rate are low, their combination performs better. Specifically, for data taken on the same day, the individual verification rates are 42% and 40% for face and gait, respectively, but 73% for their combination. Similarly, for data taken at least 3 months apart, the verification rates are 48% and 25% for face and gait, respectively, but 60% for their combination. We also find that combining outdoor gait with one outdoor face probe per person is superior to using two outdoor face probes per person or two gait probes per person, which can be considered statistical controls for showing improvement by biometric fusion.
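The abstract does not spell out the fusion rule, so the snippet below shows only a generic score-level fusion (min-max normalization followed by a weighted sum) as one plausible way to combine face and gait similarity scores; it is an illustrative sketch, not the paper's exact scheme.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw similarity scores to [0, 1] so face and gait are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(face_scores, gait_scores, w_face=0.5):
    """Weighted-sum fusion of normalized face and gait scores per gallery subject."""
    return (w_face * min_max_normalize(face_scores)
            + (1.0 - w_face) * min_max_normalize(gait_scores))
```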


international conference on pattern recognition | 2008

Robust outdoor text detection using text intensity and shape features

Zongyi Liu; Sudeep Sarkar

Recognizing text in camera images is a known hard problem because of the difficulty of segmenting text from varied and complicated backgrounds. In this paper, we propose an algorithm that employs two novel filters within a basic component-based text detection framework. The framework uses the Niblack algorithm to threshold images and groups components into regions using commonly used geometry features. The intensity filter considers the overlap between the intensity histogram of a component and that of its adjoining area. For non-text regions, we have found that this overlap is large, so we can prune components with large values of this measure. The shape filter, on the other hand, deletes regions whose constituent components come from the same object, since most words consist of different characters. The proposed method is evaluated on the text-locating database of 249 images used in the ICDAR 2003 robust reading competition. The results show that the algorithm is robust to both indoor and outdoor images, even images with complex backgrounds, which are usually hard for traditional component-based algorithms. On the ICDAR 2003 challenge experiment, the algorithm achieves a 66% precision rate (p), a 46% recall rate (r), and a 54% combined rate (f), the best reported in the literature on this dataset.
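The intensity filter lends itself to a short sketch: compute normalized intensity histograms for a component and its adjoining area and measure their overlap, pruning components whose overlap is large. The bin count and pruning threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def intensity_overlap(component_pixels, surround_pixels, bins=32):
    """Histogram-intersection overlap between a component's intensities and
    those of its adjoining area (both given as 1-D arrays of gray values)."""
    h1, _ = np.histogram(component_pixels, bins=bins, range=(0, 256))
    h2, _ = np.histogram(surround_pixels, bins=bins, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return float(np.minimum(h1, h2).sum())

def keep_component(component_pixels, surround_pixels, max_overlap=0.5):
    """Text components tend to have small overlap; prune the rest."""
    return intensity_overlap(component_pixels, surround_pixels) < max_overlap
```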


Biometric Technology for Human Identification | 2004

Toward understanding the limits of gait recognition

Zongyi Liu; Laura Malave; Adebola Osuntogun; Preksha Sudhakar; Sudeep Sarkar

Most state-of-the-art video-based gait recognition algorithms start from binary silhouettes. These silhouettes, defined as foreground regions, are usually detected by background subtraction, which results in holes or missed parts where foreground and background colors are similar, and in boundary errors due to video compression artifacts. Errors in this low-level representation make it hard to understand the effect of certain conditions, such as surface and time, on gait recognition. In this paper, we present a part-level, manual silhouette database consisting of 71 subjects, over one gait cycle, with differences in surface, shoe type, carrying condition, and time. We have a total of about 11,000 manual silhouette frames. The purpose of this manual silhouette database is twofold. First, it is a resource that we make available at http://www.GaitChallenge.org for the gait community to test and design better silhouette detection algorithms; these silhouettes can also be used to learn gait dynamics. Second, using the baseline gait recognition algorithm specified along with the HumanID Gait Challenge problem, we show that performance from manual silhouettes is similar to, and only sometimes better than, that from automated silhouettes detected by statistical background subtraction. Low performance when comparing sequences that differ in walking surface or in time is not fully explained by silhouette quality. We also study the recognition power of each body part and show that recognition based on just the legs is equal to that from the whole silhouette. There is also significant recognition power in the head and torso shape.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

The HumanID gait challenge problem: data sets, performance, and analysis

Sudeep Sarkar; P. J. Phillips; Zongyi Liu; Isidro Robledo Vega; Patrick J. Grother; Kevin W. Bowyer


Computer Vision and Pattern Recognition | 2004

Studies on silhouette quality and gait recognition

Zongyi Liu; Laura Malave; Sudeep Sarkar


Computer Vision and Pattern Recognition | 2004

Challenges in Segmentation of Human Forms in Outdoor Video

Zongyi Liu; Sudeep Sarkar


Encyclopedia of Biometrics | 2009

Evaluation of Gait Recognition

Sudeep Sarkar; Zongyi Liu

Collaboration


Dive into Zongyi Liu's collaborations.

Top Co-Authors

Sudeep Sarkar
University of South Florida

Adebola Osuntogun
University of South Florida

P. J. Phillips
National Institute of Standards and Technology

Patrick J. Grother
National Institute of Standards and Technology

Preksha Sudhakar
University of South Florida