Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Aihua Zheng is active.

Publication


Featured research published by Aihua Zheng.


International Conference on Image and Graphics | 2007

Large-Scale Graph Database Indexing Based on T-mixture Model and ICA

Bin Luo; Aihua Zheng; Jin Tang; Haifeng Zhao

This paper proposes an indexing scheme based on the t-mixture model and ICA, which is more robust than Gaussian mixture modelling when atypical points (outliers) exist or the data have heavy tails. The scheme combines an optimized vector quantizer with a probabilistic approximation-based indexing scheme. Experimental results on a large-scale graph database show a notable efficiency improvement with promising precision.


Cognitive Computation | 2015

A Biologically Inspired Vision-Based Approach for Detecting Multiple Moving Objects in Complex Outdoor Scenes

Zhengzheng Tu; Aihua Zheng; Erfu Yang; Bin Luo; Amir Hussain

In the human brain, independent components of optical flows from the medial superior temporal area are speculated to underlie motion cognition. Inspired by this hypothesis, a novel approach combining independent component analysis (ICA) with principal component analysis (PCA) is proposed in this paper for detecting multiple moving objects in complex scenes—a major real-time challenge, as bad weather or dynamic backgrounds can seriously degrade motion detection. In the proposed approach, taking advantage of ICA’s capability of separating statistically independent features from signals, the ICA algorithm is first employed to analyze the optical flows of consecutive image frames, so that the optical flows of background and foreground can be approximately separated. Since many disturbances remain in the foreground optical flows in complex scenes, PCA is then applied to the foreground flow components, so that the major optical flows corresponding to multiple moving objects are enhanced effectively while motions resulting from the changing background and small disturbances are relatively suppressed. Comparative experiments against existing popular motion detection methods on challenging image sequences demonstrate that the proposed biologically inspired vision-based approach can extract multiple moving objects effectively in complex scenes.
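As a rough illustration of this two-stage decomposition, the sketch below applies FastICA and then PCA to a synthetic matrix of per-pixel flow magnitudes (real optical-flow estimation is omitted); all dimensions, component counts, and values are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel optical-flow magnitudes over 200 frames:
# a slowly varying "background" component plus a sparse "object" burst.
n_frames, n_pixels = 200, 50
background = np.outer(np.sin(np.linspace(0, 4 * np.pi, n_frames)),
                      rng.normal(1.0, 0.1, n_pixels))
foreground = np.zeros((n_frames, n_pixels))
foreground[80:120, 10:20] = 3.0  # a moving object's flow burst
flows = background + foreground + 0.05 * rng.normal(size=(n_frames, n_pixels))

# Step 1: ICA separates statistically independent flow components,
# roughly splitting background from foreground motion.
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(flows)      # (n_frames, 5) independent sources

# Step 2: PCA on the ICA sources keeps the dominant (object) motions
# and relatively suppresses small disturbances.
pca = PCA(n_components=2)
dominant = pca.fit_transform(sources)   # (n_frames, 2)

print(sources.shape, dominant.shape)
```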


Cognitive Computation | 2017

CLASS: Collaborative Low-Rank and Sparse Separation for Moving Object Detection

Aihua Zheng; Minghe Xu; Bin Luo; Zhili Zhou; Chenglong Li

Low-rank models have been successfully applied to background modeling and have achieved promising results on moving object detection. However, the assumption that moving objects can be modelled as sparse outliers limits the performance of these models when the moving objects are relatively large. Inspired by the human visual system, which can cognitively perceive the physical size of an object from retinal images of different sizes, we propose a novel approach, called Collaborative Low-Rank And Sparse Separation (CLASS), for moving object detection. Given the data matrix that accumulates sequential frames from the input video, CLASS detects the moving objects as sparse outliers against the low-rank background structure while pursuing global appearance consistency for both foreground and background. The sparsity and global appearance consistency constraints are complementary yet competing, so CLASS can detect moving objects of different sizes effectively. Smoothness constraints on object motion are also introduced in CLASS to further improve robustness to noise. Moreover, we utilize an edge-preserving filtering method to substantially speed up CLASS without much loss of accuracy. Extensive experiments on both public and newly created video sequences suggest that CLASS achieves superior performance and comparable efficiency compared with other state-of-the-art approaches.
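The baseline that CLASS builds on — separating a data matrix into a low-rank background and sparse outliers — can be sketched as plain principal component pursuit solved with a simple fixed-penalty ADMM iteration. This omits CLASS's appearance-consistency and motion-smoothness terms, and the parameter choices are common defaults assumed here, not the paper's:

```python
import numpy as np

def rpca(D, lam=None, mu=None, n_iter=300):
    """Minimal principal component pursuit: D ~ L (low-rank) + S (sparse)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(D).sum())
    Y = np.zeros_like(D)  # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular-value thresholding -> low-rank update
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0)) @ Vt
        # Soft thresholding -> sparse update
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y += mu * (D - L - S)
    return L, S

# Synthetic "video" matrix: rank-1 background plus a few sparse outliers.
rng = np.random.default_rng(0)
bg = np.outer(rng.normal(size=60), rng.normal(size=40))
sp = np.zeros((60, 40))
sp[rng.random((60, 40)) < 0.05] = 5.0  # sparse "moving object" entries
L, S = rpca(bg + sp)
print(np.linalg.norm(L - bg) / np.linalg.norm(bg))
```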


Pattern Recognition | 2012

Graph matching based on spectral embedding with missing value

Jin Tang; Bo Jiang; Aihua Zheng; Bin Luo

This paper proposes an efficient algorithm for inexact graph matching based on spectral embedding with missing values. We first build an association graph model based on an initial matching algorithm. Then, using a dot-product representation of graphs with missing values, a new embedding method (co-embedding) is presented, in which the correspondences between unmatched nodes are treated as missing data in the association graph. Finally, a new graph matching algorithm is proposed that alternates between co-embedding and point pattern matching. Convincing experimental results on both synthetic and real-world data demonstrate the effectiveness of the proposed graph matching algorithm.
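A minimal sketch of the embed-then-match idea: embed each graph's nodes with adjacency eigenvectors and solve the node correspondence as an assignment problem. This uses a plain spectral embedding plus the Hungarian algorithm, not the paper's co-embedding with missing values, and the sign-fixing heuristic is an assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def spectral_embed(A, k=3):
    """Embed graph nodes using the k largest-magnitude adjacency eigenvectors,
    with a deterministic sign fix so embeddings are comparable across graphs."""
    w, V = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(w))[:k]
    E = V[:, idx]
    # Each eigenvector is defined only up to sign; orient it so that its
    # largest-magnitude entry is positive.
    signs = np.sign(E[np.argmax(np.abs(E), axis=0), range(k)])
    return E * signs

rng = np.random.default_rng(1)
A = rng.random((8, 8))
A = (A + A.T) / 2                     # weighted undirected graph
perm = rng.permutation(8)
B = A[np.ix_(perm, perm)]             # isomorphic, permuted copy

cost = cdist(spectral_embed(A), spectral_embed(B))
_, match = linear_sum_assignment(cost)  # Hungarian assignment
print((perm[match] == np.arange(8)).all())
```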


CCF Chinese Conference on Computer Vision | 2015

Motion Compensation Based Fast Moving Object Detection in Dynamic Background

Wei Zhang; Chenglong Li; Aihua Zheng; Jin Tang; Bin Luo

This paper investigates robust and fast moving object detection against dynamic backgrounds. A motion-compensation-based approach is proposed to maintain an online background model, from which moving objects are detected in a fast fashion. Specifically, a pixel-level background model is built for each pixel, represented by a set of pixel values drawn from its location and neighborhood. Given the background models of the previous frame, an edge-preserving optical flow algorithm is employed to estimate the motion of each pixel, and the background models are propagated to the current frame accordingly. Each pixel is then classified as foreground or background according to the compensated background model. Moreover, the compensated background model is updated online by a fast random algorithm to adapt to variations in the background. Extensive experiments on collected challenging videos suggest that our method outperforms other state-of-the-art methods and runs at 8 frames per second.
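The pixel-level sample-based model can be sketched as follows. This shows only the classification and random-update steps (the optical-flow compensation is omitted), and all thresholds and sample counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(frame, samples, radius=20, min_matches=2):
    """Label a pixel background if at least `min_matches` of its stored
    background samples lie within `radius` of its current value."""
    matches = (np.abs(samples - frame[..., None]) < radius).sum(axis=-1)
    return matches < min_matches  # True = foreground

def update(samples, frame, bg_mask, p=1 / 16):
    """Randomly replace one stored sample at background pixels (online update)."""
    h, w, n = samples.shape
    pick = (rng.random((h, w)) < p) & bg_mask
    slot = rng.integers(0, n, (h, w))
    ys, xs = np.nonzero(pick)
    samples[ys, xs, slot[ys, xs]] = frame[ys, xs]

# Static grey background with a bright "moving" square in the current frame.
h, w, n = 32, 32, 10
samples = rng.normal(128, 3, (h, w, n))   # per-pixel background samples
frame = np.full((h, w), 128.0)
frame[5:12, 5:12] = 220.0                 # the object
fg = classify(frame, samples)
update(samples, frame, ~fg)               # update only at background pixels
print(fg.sum())
```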


IEEE Transactions on Systems, Man, and Cybernetics | 2018

A Subspace Learning Approach to Multishot Person Reidentification

Aihua Zheng; Xuehan Zhang; Bo Jiang; Bin Luo; Chenglong Li

This paper addresses the challenging problem of multishot person reidentification (Re-ID) in real-world uncontrolled surveillance systems. A key issue is how to effectively represent and process multiple images with varied appearance caused by variations in pose, occlusion, and viewpoint. To this end, this paper develops a novel subspace learning approach that pursues a regularized low-rank and sparse representation for multishot person Re-ID. For the images of a person crossing a certain camera, we assume that the appearances of the subset of images with similar viewpoints draw from the same low-rank subspace, and that all the images of a person under a camera lie on a union of low-rank subspaces. Based on this assumption, we propose to learn a nonnegative low-rank and sparse graph to represent the person images. Moreover, a recurring-pattern prior is integrated into our model to refine the affinities among images. Extensive experiments on four public benchmark datasets yield impressive performance, improving over the state-of-the-art methods by 22.9% on iLIDS-VID, 42.4% on PRID 2011, 39.7% and 30.6% on SAIVT-SoftBio camera 3/8 and camera 5/8, respectively, and 1.6% on MARS.


Multimedia Tools and Applications | 2017

Local-to-global background modeling for moving object detection from non-static cameras

Aihua Zheng; Lei Zhang; Wei Zhang; Chenglong Li; Jin Tang; Bin Luo

This paper investigates efficient and robust moving object detection from non-static cameras. To tackle the background motion caused by moving cameras and to alleviate the interference of noise, we propose a local-to-global background model for moving object detection. Firstly, a motion-compensation-based, local location-specific background model is deployed to roughly detect foreground regions from non-static cameras. More specifically, the local background model is built for each pixel and represented by a set of pixel values drawn from its location and neighborhood; each pixel is classified as foreground or background according to the background model compensated by fast optical flow. Secondly, we estimate a global background model from the rough superpixel-based background regions to further separate foreground from background accurately. In particular, we use superpixels to generate the initial background regions from the local model's detection results, which alleviates noise, and then estimate a Gaussian Mixture Model (GMM) for the background at the superpixel level to refine the foreground regions. Extensive experiments on a newly created dataset, including 10 challenging video sequences recorded by PTZ and hand-held cameras, suggest that our method outperforms other state-of-the-art methods in accuracy.
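The global GMM refinement step can be sketched with scikit-learn: fit a mixture to rough background values, then flag pixels whose likelihood under that model is too low. Superpixel generation and the local model are omitted, the values are synthetic, and the likelihood threshold used here is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Rough background pixels (e.g. from a local model); here synthetic grey values.
bg_pixels = rng.normal(120, 5, (500, 1))

# Fit a global background GMM, then score candidate pixels against it.
gmm = GaussianMixture(n_components=2, random_state=0).fit(bg_pixels)
candidates = np.array([[121.0], [240.0]])  # near-background vs. bright object
scores = gmm.score_samples(candidates)     # log-likelihood under the background

# Flag as foreground anything less likely than the worst training pixel.
threshold = gmm.score_samples(bg_pixels).min()
is_foreground = scores < threshold
print(is_foreground)
```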


CCF Chinese Conference on Computer Vision | 2017

Moving Object Detection via Integrating Spatial Compactness and Appearance Consistency in the Low-Rank Representation

Minghe Xu; Chenglong Li; Hanqin Shi; Jin Tang; Aihua Zheng

Low-rank and sparse separation models have been successfully applied to background modeling and have achieved promising results on moving object detection, but the task remains challenging in complex environments. In this paper, we propose to enforce spatial compactness and appearance consistency within the low-rank and sparse separation framework. Given the data matrix that accumulates sequential frames from the input video, our model detects the moving objects as sparse outliers against the low-rank background structure. Furthermore, we exploit spatial compactness by enforcing consistency among the pixels within the same superpixel. This strategy simultaneously promotes appearance consistency, since a superpixel groups pixels with homogeneous appearance in a local neighborhood. Extensive experiments on the public GTD dataset suggest that our model better preserves the boundary information of objects and achieves superior performance compared with other state-of-the-art methods.


Software Engineering Research and Applications | 2009

A Robust Approach to Subsequence Matching

Aihua Zheng; Jixin Ma; Miltos Petridis; Jin Tang; Bin Luo

In terms of a general time theory that addresses time-elements as typed point-based intervals, a formal characterization of time-series and state-sequences is introduced. Based on this framework, the subsequence matching problem is tackled by transforming it into a bipartite graph matching problem. A hybrid similarity model with high tolerance to inversion, crossover, and noise is then proposed for matching the corresponding bipartite graphs, involving both temporal and non-temporal measurements. Experimental results on reconstructed time-series data from the UCI KDD Archive demonstrate that this approach is more effective than traditional similarity-model-based algorithms, promising robust techniques for larger time-series databases and real-life applications such as Content-Based Video Retrieval (CBVR).
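The core idea — scoring a candidate window by a bipartite assignment between query states and window states, which tolerates local inversions — can be sketched as below, with a trivial equality similarity standing in for the paper's hybrid temporal/non-temporal model:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def subsequence_similarity(query, series, state_sim):
    """Score each window of `series` against `query` by solving a
    bipartite assignment between query states and window states."""
    q, n = len(query), len(series)
    best_score, best_pos = -np.inf, -1
    for start in range(n - q + 1):
        window = series[start:start + q]
        # cost[i][j]: dissimilarity between query state i and window state j
        cost = np.array([[1 - state_sim(a, b) for b in window] for a in query])
        rows, cols = linear_sum_assignment(cost)   # optimal bipartite matching
        score = q - cost[rows, cols].sum()
        if score > best_score:
            best_score, best_pos = score, start
    return best_pos, best_score

# Toy state sequences: the match survives a local inversion (swapped states).
sim = lambda a, b: 1.0 if a == b else 0.0
series = list("xxabdcexx")            # "abdce" is "abcde" with c and d inverted
pos, score = subsequence_similarity(list("abcde"), series, sim)
print(pos, score)
```

Because the assignment ignores ordering inside the window, the inverted pair still yields a perfect score, which is the tolerance property the abstract describes.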


Annual ACIS International Conference on Computer and Information Science | 2009

Temporal Pattern Recognition in Video Clips Detection

Aihua Zheng; Jixin Ma; Bin Luo; Miltos Petridis; Sulan Zhai; Jin Tang

Temporal representation and reasoning play an important role in data mining and knowledge discovery, particularly in mining and recognizing patterns with rich temporal information. Based on a formal characterization of time-series and state-sequences, this paper presents a computational technique and algorithm for matching state-based temporal patterns. As a case study of real-life applications, zone-defense pattern recognition in basketball games is examined as an illustrative example. Experimental results demonstrate that the approach provides a formal and comprehensive temporal ontology for research and applications in video event detection.

Collaboration


Dive into Aihua Zheng's collaboration.

Top Co-Authors


Jixin Ma

University of Greenwich
