Publication


Featured research published by Zezhi Chen.


Computer Vision and Image Understanding | 2014

A self-adaptive Gaussian mixture model

Zezhi Chen; Tim Ellis

Most background modeling techniques use a single learning rate of adaptation that is inadequate for real scenes because the background model is unable to effectively deal with both slow and sudden illumination changes. This paper presents an algorithm based on a self-adaptive Gaussian mixture to model the background of a scene imaged by a static video camera. Such background modeling is used in conjunction with foreground detection to find objects of interest that do not belong to the background. The model uses a dynamic learning rate with adaptation to global illumination to cope with sudden variations of scene illumination. The algorithm performance is benchmarked using the video sequences created for the Background Models Challenge (BMC) [1]. Experimental results are compared with the performance of other algorithms benchmarked with the BMC dataset, and demonstrate comparable detection rates.
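
The adaptive-learning-rate idea above can be approximated with off-the-shelf tools. The sketch below uses OpenCV's MOG2 Gaussian mixture background subtractor and switches to a faster learning rate when the global mean intensity jumps; the video filename, the 10-grey-level threshold and the two rates are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch (not the paper's code): GMM background subtraction with a
# per-frame learning rate that speeds up after a sudden global-illumination
# change, approximating the self-adaptive behaviour described above.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")  # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

prev_mean = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mean_intensity = float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
    # Crude global-illumination cue: a large jump in mean intensity
    # triggers a faster learning rate so the model re-adapts quickly.
    if prev_mean is not None and abs(mean_intensity - prev_mean) > 10:
        lr = 0.05   # fast adaptation after a sudden change (assumed value)
    else:
        lr = 0.002  # slow adaptation for a stable scene (assumed value)
    prev_mean = mean_intensity

    fg_mask = mog.apply(frame, learningRate=lr)  # 255 = foreground, 127 = shadow
    fg_mask = cv2.medianBlur(fg_mask, 5)         # suppress speckle noise
cap.release()
```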


international conference on intelligent transportation systems | 2012

Vehicle detection, tracking and classification in urban traffic

Zezhi Chen; Tim Ellis; Sergio A. Velastin

This paper presents a system for vehicle detection, tracking and classification from roadside CCTV. The system counts vehicles and separates them into four categories: car, van, bus and motorcycle (including bicycles). A new background Gaussian Mixture Model (GMM) and shadow removal method have been used to deal with sudden illumination changes and camera vibration. A Kalman filter tracks a vehicle to enable classification by majority voting over several consecutive frames, and a level set method has been used to refine the foreground blob. Extensive experiments with real-world data have been undertaken to evaluate system performance. The best performance results from training an SVM (Support Vector Machine) using a combination of a vehicle silhouette and intensity-based pyramid HOG features extracted following background subtraction, classifying foreground blobs with majority voting. The evaluation results from the videos are encouraging: for a detection rate of 96.39%, the false positive rate is only 1.36% and the false negative rate 4.97%. Even including challenging weather conditions, classification accuracy is 94.69%.
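
As one concrete piece of the pipeline above, the sketch below shows the majority-voting step: each tracked vehicle collects a per-frame class prediction and the final label is the most frequent class over the track. The class names match the four categories in the paper; the voting helper itself is a hypothetical illustration, not the authors' code.

```python
# Illustrative majority-voting step: a tracked vehicle gets one class
# prediction per frame, and the track's final label is the modal class.
from collections import Counter

CLASSES = ("car", "van", "bus", "motorcycle")  # the four categories above

def vote_track_label(per_frame_predictions):
    """Return the most frequent class over one vehicle track."""
    label, _ = Counter(per_frame_predictions).most_common(1)[0]
    return label

# Example: predictions for one vehicle over 7 consecutive frames.
print(vote_track_label(["car", "van", "car", "car", "van", "car", "car"]))  # car
```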


international conference on intelligent computing | 2009

Road vehicle classification using Support Vector Machines

Zezhi Chen; Nick Pears; Michael J. Freeman; Jim Austin

The Support Vector Machine (SVM) provides a robust, accurate and effective technique for pattern recognition and classification. Although the SVM is essentially a binary classifier, it can be adapted to handle multi-class classification tasks. The conventional way to extend the SVM to multi-class scenarios is to decompose an m-class problem into a series of two-class problems, for which either the one-vs-one (OVO) or one-vs-all (OVA) approaches are used. In this paper, a practical and systematic approach using a kernelised SVM is proposed and developed such that it can be implemented in embedded hardware within a roadside camera. The foreground segmentation of the vehicle is obtained using a Gaussian mixture model background subtraction algorithm. The feature vector describing the foreground (vehicle) silhouette encodes size, aspect ratio, width and solidity in order to classify vehicle type (car, van, HGV). In addition, 3D colour histograms are used to generate a feature vector encoding vehicle colour. The good recognition rates achieved in our experiments indicate that our approach is well suited for pragmatic embedded vehicle classification applications.
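
The OVO/OVA decomposition mentioned above is available directly in scikit-learn, which can stand in for the paper's embedded kernelised SVM. In the sketch below the 4-D silhouette feature (size, aspect ratio, width, solidity) and the random labels are placeholders for illustration only.

```python
# Sketch of the OVO / OVA decomposition of a multi-class SVM, using
# scikit-learn stand-ins; features and labels are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 4))                         # size, aspect ratio, width, solidity
y = rng.choice(["car", "van", "HGV"], size=300)  # placeholder labels

ovo = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale"))   # m(m-1)/2 binary SVMs
ova = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))  # m binary SVMs

ovo.fit(X, y)
ova.fit(X, y)
print(ovo.predict(X[:5]), ova.predict(X[:5]))
```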


international conference on intelligent transportation systems | 2011

Vehicle type categorization: A comparison of classification schemes

Zezhi Chen; Tim Ellis; Sergio A. Velastin

This paper describes research to classify road vehicles into a range of broad categories using simple measures of size and shape derived from view-dependent binary silhouettes in images captured by a static roadside CCTV camera. A novel approach to camera calibration utilizes calibrated images mapped by Google Earth to provide accurately surveyed scene geometry that is manually corresponded with visible ground-plane landmarks in the CCTV images. In the experiments reported here, manual segmentation is used to delineate vehicles in the images and a set of scaled features is extracted from each binary silhouette. Classification assigns each blob to one of four vehicle classes (car, van, bus and bicycle/motorcycle) using two feature-based classifiers (SVM and random forests) and a model-based approach. Results are presented for a 10-fold cross-validation study involving over 2000 manually labeled silhouettes. A peak classification performance of 96.26% is observed for the SVM.
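
A 10-fold cross-validation comparison of the two feature-based classifiers named above can be sketched with scikit-learn as follows; the silhouette features and labels are synthetic stand-ins for the paper's manually labelled data, so the printed accuracies only illustrate the evaluation mechanics.

```python
# Sketch of a 10-fold cross-validation comparison of SVM and random forest
# classifiers; the silhouette features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((2000, 6))                                        # scaled silhouette features
y = rng.choice(["car", "van", "bus", "bicycle/motorcycle"], size=2000)

for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("Random forest", RandomForestClassifier(n_estimators=200))]:
    scores = cross_val_score(clf, X, y, cv=10)                   # 10-fold CV accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```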


international conference on computer vision | 2011

Self-adaptive Gaussian mixture model for urban traffic monitoring system

Zezhi Chen; Tim Ellis

Identifying moving vehicles is a critical task for an urban traffic monitoring system. With static cameras, background subtraction techniques are commonly used to separate foreground moving objects from the background at the pixel level, and the Gaussian mixture model is commonly used for background modelling. Most background modelling techniques use a single learning rate of adaptation, which is inadequate for complex scenes as the background model cannot deal with sudden illumination changes. In this paper, we propose a self-adaptive Gaussian mixture model to address these problems. We introduce an online dynamic learning rate and global illumination adaptation of the background model to deal with fast-changing scene illumination. Results of experiments using manually annotated urban traffic video with sudden illumination changes illustrate that our algorithm achieves consistently better performance in terms of ROC curve, detection accuracy, Matthews correlation coefficient and Jaccard coefficient compared with other approaches based on the widely used Gaussian mixture model.
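
The evaluation metrics listed above (detection accuracy, Matthews correlation coefficient, Jaccard coefficient) can be computed per frame from a predicted foreground mask and a ground-truth mask, for example with scikit-learn as in the sketch below; both masks here are synthetic.

```python
# Illustrative per-frame evaluation of a foreground mask against ground truth
# using the metrics named above; both masks are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, jaccard_score, matthews_corrcoef

rng = np.random.default_rng(2)
gt = rng.integers(0, 2, size=(240, 320))        # ground-truth foreground mask
pred = gt.copy()
pred[rng.random((240, 320)) < 0.05] ^= 1        # corrupt 5% of pixels as "errors"

gt_flat, pred_flat = gt.ravel(), pred.ravel()
print("accuracy:", accuracy_score(gt_flat, pred_flat))
print("MCC:     ", matthews_corrcoef(gt_flat, pred_flat))
print("Jaccard: ", jaccard_score(gt_flat, pred_flat))
```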


digital image computing: techniques and applications | 2011

Multi-shape Descriptor Vehicle Classification for Urban Traffic

Zezhi Chen; Tim Ellis

This paper investigates the effectiveness of state-of-the-art classification algorithms in categorising road vehicles for an urban traffic monitoring system using a multi-shape descriptor. The analysis is applied to monocular video acquired from a static pole-mounted roadside CCTV camera on a busy street. Manual vehicle segmentation was used to acquire a large (>2000 sample) database of labelled vehicles, from which a set of measurement-based features (MBF) was extracted in combination with a pyramid of HOG (histogram of oriented gradients, both edge and intensity based) features. These are used to classify the objects into four main vehicle categories: car, van, bus and motorcycle. Results are presented for a number of experiments that were conducted to compare support vector machine (SVM) and random forest (RF) classifiers. 10-fold cross-validation has been used to evaluate the performance of the classification methods. The results demonstrate that all methods achieve a recognition rate above 95% on the dataset, with SVM consistently outperforming RF. A combination of MBF and IPHOG features gave the best performance of 99.78%.
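
One ingredient of the descriptor described above, a pyramid of HOG features over a vehicle patch, might be sketched with scikit-image as follows; the three pyramid levels, the HOG parameters and the random patch are assumptions for illustration, and the measurement-based features are omitted.

```python
# Sketch of a 3-level pyramid of HOG features over a (placeholder) vehicle
# patch with scikit-image; parameters are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

patch = np.random.rand(64, 64)                  # placeholder intensity patch

features = []
for size in (64, 32, 16):                       # a simple 3-level pyramid
    level = resize(patch, (size, size), anti_aliasing=True)
    features.append(hog(level, orientations=9,
                        pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
descriptor = np.concatenate(features)           # pyramid-HOG feature vector
print(descriptor.shape)
```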


international symposium on visual computing | 2009

Background Subtraction in Video Using Recursive Mixture Models, Spatio-Temporal Filtering and Shadow Removal

Zezhi Chen; Nick Pears; Michael J. Freeman; Jim Austin

We describe our approach to segmenting moving objects from the color video data supplied by a nominally stationary camera. There are two main contributions in our work. The first augments Zivkovic and van der Heijden's recursively updated Gaussian mixture model approach with a multi-dimensional Gaussian kernel spatio-temporal smoothing transform. We show that this improves the segmentation performance of the original approach, particularly in adverse imaging conditions, such as when there is camera vibration. Our second contribution is a comprehensive comparative evaluation of shadow and highlight detection approaches, which are an essential component of background subtraction in unconstrained outdoor scenes. A comparative evaluation of these approaches over different color spaces is currently lacking in the literature. We show that both segmentation and shadow removal perform best when the RGB color space is used.
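
A rough approximation of the two contributions highlighted above can be put together with OpenCV: spatially smoothing each frame before Zivkovic's recursive GMM (MOG2), and reading off the shadow label that the subtractor assigns. The temporal part of the smoothing and the colour-space comparison are not reproduced; the filename and kernel size are illustrative assumptions.

```python
# Rough sketch: spatial Gaussian smoothing before OpenCV's MOG2 (Zivkovic's
# recursive GMM), with shadow pixels read off separately from true foreground.
import cv2

cap = cv2.VideoCapture("street.avi")            # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    smoothed = cv2.GaussianBlur(frame, (5, 5), 1.5)  # spatial smoothing only
    mask = mog.apply(smoothed)
    foreground = mask == 255                    # definite foreground pixels
    shadow = mask == 127                        # pixels flagged as shadow
cap.release()
```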


international conference on image processing | 2007

Video Object Tracking Based on a Chamfer Distance Transform

Zezhi Chen; Zsolt Levente Husz; Iain Wallace; Andrew M. Wallace

This paper describes the use of variable kernels based on the normalized Chamfer distance transform (NCDT) for mean-shift object tracking in colour video sequences. This replaces the more usual Epanechnikov kernel, improving target representation and localization without increasing the processing time, while minimising the distance between the RGB distributions of successive frames using the Bhattacharyya coefficient. The target shape which defines the NCDT is found either by regional segmentation or background-difference imaging, depending on the nature of the video sequence. The improved performance is demonstrated on a number of colour video sequences.
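
Two of the building blocks named above, a kernel derived from the normalized Chamfer distance transform of a binary target mask and the Bhattacharyya coefficient between colour histograms, can be sketched as follows; the toy ellipse mask and random histograms are placeholders, not the authors' implementation.

```python
# Sketch of an NCDT-based kernel from a binary target mask, plus the
# Bhattacharyya coefficient between two normalized histograms.
import cv2
import numpy as np

# Kernel: distance of each pixel inside the shape to the background,
# normalized to [0, 1] so weights peak in the interior of the target.
mask = np.zeros((60, 40), np.uint8)
cv2.ellipse(mask, (20, 30), (15, 25), 0, 0, 360, 255, -1)  # toy target shape
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
ncdt_kernel = dist / dist.max()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (higher = more similar)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

p = np.random.rand(16, 16, 16)   # toy RGB histograms, 16 bins per channel
q = np.random.rand(16, 16, 16)
print(bhattacharyya(p, q))
```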


british machine vision conference | 2007

Active Segmentation and Adaptive Tracking Using Level Sets

Zezhi Chen; Andrew M. Wallace

We describe algorithms for active segmentation (AS) of the first frame, and subsequent, adaptive object tracking through succeeding frames, in a video sequence. Object boundaries that include different known colours are segmented against complex backgrounds; it is not necessary for the object to be homogeneous. As the object moves, we develop a tracking algorithm that adaptively changes the colour space model (CSM) according to measures of similarity between object and background. We employ a kernel weighted by the normalized Chamfer distance transform, that changes shape according to a level set definition, to correspond to changes in the perceived 2D contour as the object rotates or deforms. This improves target representation and localisation. Experiments conducted on various synthetic and real colour images illustrate the segmentation and tracking capability and versatility of the algorithm in comparison with results using previously published methods.
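
A minimal level-set segmentation of a first frame can be sketched with scikit-image's morphological Chan-Vese variant, as below; this stands in for the active-segmentation step only and does not reproduce the multi-colour model or the adaptive colour-space selection described above. The test image and iteration count are arbitrary.

```python
# Minimal level-set segmentation of a "first frame" with scikit-image's
# morphological Chan-Vese variant; a stand-in, not the paper's method.
from skimage import color, data
from skimage.segmentation import morphological_chan_vese

frame = color.rgb2gray(data.astronaut())         # placeholder first frame
level_set = morphological_chan_vese(frame, 35)   # 35 iterations, default init
object_mask = level_set.astype(bool)             # evolved contour as a binary mask
print(object_mask.shape, object_mask.mean())
```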


advanced video and signal based surveillance | 2013

Automatic lane detection from vehicle motion trajectories

Zezhi Chen; Tim Ellis

Lane detection is important in intelligent transportation systems. This paper presents a novel algorithm for vehicle motion trajectory and lane boundary detection that uses Gaussian mixture model-based background subtraction and active contours. The algorithm uses an adaptive GMM that can cope with sudden illumination changes for detecting moving vehicles (resulting in a road score map, RSmap), followed by a Kalman filter tracker to generate pixel-level motion vectors. A novel active contour energy expression based on the accumulation of motion trajectories and the spatio-temporal RSmaps is used to detect lane boundaries. Experimental results are presented for video from a real road scene to show the effectiveness of the proposed algorithm, without the need for road lane markings.
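
The trajectory-accumulation step described above might look like the sketch below: tracked vehicle positions are voted into a road score map which is then normalized and thresholded. The synthetic trajectories and the 0.2 threshold are assumptions; the active-contour energy that extracts lane boundaries from this map is not shown.

```python
# Sketch of accumulating tracked vehicle positions into a road score map
# (RSmap); trajectories, image size and the threshold are synthetic.
import numpy as np

H, W = 240, 320
rsmap = np.zeros((H, W), dtype=np.float32)

# Each trajectory is a list of (x, y) pixel positions from a tracker.
trajectories = [[(50 + t, 200 - t // 2) for t in range(150)],
                [(80 + t, 210 - t // 2) for t in range(150)]]

for traj in trajectories:
    for x, y in traj:
        if 0 <= y < H and 0 <= x < W:
            rsmap[y, x] += 1.0                   # vote for "road" at this pixel

rsmap /= rsmap.max()                             # normalize the accumulated votes
lane_region = rsmap > 0.2                        # crude threshold on the score map
print(int(lane_region.sum()), "pixels marked as travelled road")
```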

Collaboration


Dive into Zezhi Chen's collaboration.

Top Co-Authors

Kevin Hammond

University of St Andrews
