Publication


Featured research published by Xi Cai.


Journal of Electronic Imaging | 2014

Improved visual background extractor using an adaptive distance threshold

Guang Han; Jinkuan Wang; Xi Cai

Camouflage is a challenging issue in moving object detection. Even the recent and advanced background subtraction technique, visual background extractor (ViBe), cannot effectively deal with it. To better handle camouflage according to the perception characteristics of the human visual system (HVS) in terms of minimum change of intensity under a certain background illumination, we propose an improved ViBe method using an adaptive distance threshold, named IViBe for short. Different from the original ViBe using a fixed distance threshold for background matching, our approach adaptively sets a distance threshold for each background sample based on its intensity. Through analyzing the performance of the HVS in discriminating intensity changes, we determine a reasonable ratio between the intensity of a background sample and its corresponding distance threshold. We also analyze the impacts of our adaptive threshold together with an update mechanism on detection results. Experimental results demonstrate that our method outperforms ViBe even when the foreground and background share similar intensities. Furthermore, in a scenario where foreground objects are motionless for several frames, our IViBe not only reduces the initial false negatives, but also suppresses the diffusion of misclassification caused by those false negatives serving as erroneous background seeds, and hence shows an improved performance compared to ViBe.
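
Below is a minimal sketch of the adaptive matching step in Python, assuming grayscale frames and the standard ViBe per-pixel sample model; the intensity-to-threshold ratio and the threshold floor are illustrative values, not the constants derived in the paper.

```python
import numpy as np

NUM_SAMPLES = 20      # background samples kept per pixel (ViBe default)
MIN_MATCHES = 2       # samples that must match for a background decision
RATIO = 0.1           # assumed intensity-to-threshold ratio (illustrative)
MIN_THRESHOLD = 10.0  # assumed floor so dark samples keep a usable radius

def classify_frame(frame, samples):
    """frame: (H, W) grayscale image; samples: (NUM_SAMPLES, H, W) model.
    Returns a boolean foreground mask."""
    # Adaptive threshold: each background sample gets its own matching
    # radius, proportional to its intensity (Weber-like behaviour of the HVS).
    thresholds = np.maximum(RATIO * samples, MIN_THRESHOLD)
    # A sample matches when the new pixel lies within its adaptive radius.
    matches = np.abs(samples - frame[None, :, :]) < thresholds
    # Background needs at least MIN_MATCHES agreeing samples.
    return matches.sum(axis=0) < MIN_MATCHES

# Example: a mid-grey scene; a 5-level change stays background (5 < 12),
# while a 50-level change at (1, 1) is flagged as foreground.
samples = np.full((NUM_SAMPLES, 4, 4), 120.0)
frame = np.full((4, 4), 125.0)
frame[1, 1] = 170.0
print(classify_frame(frame, samples).astype(int))
```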


Journal of Computers | 2013

Image Fusion Method Based on Directional Contrast-Inspired Unit-Linking Pulse Coupled Neural Networks in Contourlet Domain

Xi Cai; Guang Han; Jinkuan Wang

To take full advantage of the global features of source images, we propose an image fusion method based on adaptive unit-linking pulse coupled neural networks (ULPCNNs) in the contourlet domain. Considering that each high-frequency subband of the contourlet decomposition carries rich directional information, we employ the directional contrast of each coefficient as the external stimulus for the corresponding neuron. The linking range is also tied to this contrast, adaptively improving the global coupling characteristics of the ULPCNNs. In this way, the output pulses of the ULPCNNs simulate the response of the human visual system to detailed image information. The first firing time of each neuron is then used to determine the fusion rule for the corresponding detail coefficients. Experimental results indicate the superiority of our algorithm for multifocus images, remote sensing images, and infrared and visible images, in terms of both visual effects and objective evaluations.
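
A minimal sketch of the idea follows, assuming the directional contrast of each subband coefficient is already computed and normalized to [0, 1]; the threshold dynamics (initial value, decay, refractory boost) are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def first_firing_times(S, beta, n_iter=30, theta0=1.0, decay=0.8):
    """S: stimulus (directional contrast) normalized to [0, 1];
    beta: linking strength per neuron. Returns each neuron's first
    firing iteration (np.inf if it never fires)."""
    S = np.asarray(S, dtype=float)
    theta = np.full_like(S, theta0)            # dynamic firing threshold
    fired_prev = np.zeros(S.shape, dtype=bool)
    first_fire = np.full_like(S, np.inf)
    for t in range(1, n_iter + 1):
        # Unit-linking: L = 1 if any 8-neighbour fired last step, else 0.
        L = maximum_filter(fired_prev.astype(float), size=3)
        U = S * (1.0 + beta * L)               # internal activity
        fired = U > theta
        first_fire = np.where(fired & np.isinf(first_fire), t, first_fire)
        theta = np.where(fired, theta + 5.0, theta * decay)
        fired_prev = fired
    return first_fire

def fuse(coef_a, coef_b, contrast_a, contrast_b):
    """Fusion rule from the abstract: keep the detail coefficient of the
    source whose neuron fires first (stronger directional contrast).
    The contrast doubles as the linking strength, as described above."""
    t_a = first_firing_times(contrast_a, beta=contrast_a)
    t_b = first_firing_times(contrast_b, beta=contrast_b)
    return np.where(t_a <= t_b, coef_a, coef_b)
```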


Sensors | 2016

Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform

Guang Han; Jinkuan Wang; Xi Cai

Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable and immediate detection results are expected from the first frame of a video. We therefore propose a training-free background subtraction method based on the three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by temporal consistency of intensity in the 3D space-time domain and hence correspond to low-frequency components in the 3D frequency domain. Motivated by this, we eliminate the low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, eliminating low-frequency components in its sub-bands is equivalent to applying a pyramidal 3D filter. This 3D filter helps our method preserve the inner parts of detected objects and reduce ringing around object boundaries. Moreover, we use wavelet shrinkage to remove disturbances to the intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques.
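
A minimal single-level sketch of the core idea, assuming PyWavelets; the paper's pyramidal multiscale filtering, wavelet shrinkage, and entropy-based adaptive threshold are simplified here to a fixed wavelet and a fixed threshold.

```python
import numpy as np
import pywt  # PyWavelets

def moving_object_mask(frames, threshold=10.0):
    """frames: (T, H, W) grayscale buffer (even sizes keep shapes exact).
    Returns a boolean foreground mask for the latest frame."""
    coeffs = pywt.dwtn(np.asarray(frames, dtype=float), wavelet='haar')
    # The 'aaa' subband is low-frequency along time and space: the static
    # background. Zeroing it keeps only motion-related components.
    coeffs['aaa'] = np.zeros_like(coeffs['aaa'])
    residual = pywt.idwtn(coeffs, wavelet='haar')
    # A fixed threshold stands in for the entropy-based adaptive threshold.
    return np.abs(residual[-1]) > threshold
```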


International Journal of Machine Learning and Cybernetics | 2017

Background subtraction based on modified online robust principal component analysis

Guang Han; Jinkuan Wang; Xi Cai

In video surveillance, camera jitter occurs frequently and poses a great challenge to foreground detection. To overcome this challenge without any additional anti-jitter preprocessing, we propose a background subtraction method based on modified online robust principal component analysis (ORPCA). We modify the original ORPCA algorithm by introducing a prior-information-based adaptive weighting parameter, so that our method adapts to the varying sparsity of foreground objects across frames, which substantially improves the accuracy of foreground detection. Specifically, we use the sparsity of the foreground detection result of the last frame as prior information and adaptively adjust the weighting parameter of the sparse term for the current frame. Moreover, to make the modified ORPCA applicable to foreground detection, we reduce the dimension of the input frames by representing non-overlapping blocks by their median values. Unlike recent advanced methods that rely on pixel-based background models, our method uses the low-dimensional subspace constructed from the backgrounds of previous frames to estimate the background of a new input frame, and hence handles camera jitter well. Experimental results demonstrate that our method achieves remarkable results and outperforms several advanced methods in coping with camera jitter.
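
The two modifications can be sketched as follows; `orpca_step` is a hypothetical stand-in for one online robust PCA update (subspace fit plus sparse decomposition), and the sparsity-to-weight mapping is an illustrative assumption rather than the paper's exact rule.

```python
import numpy as np

BLOCK = 4  # side length of the non-overlapping blocks

def block_median(frame):
    """Dimension reduction: represent each BLOCK x BLOCK block by its median."""
    h, w = frame.shape
    blocks = frame[:h - h % BLOCK, :w - w % BLOCK].reshape(
        h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return np.median(blocks, axis=(1, 3))

def adaptive_lambda(prev_fg_mask, lam_base=1.0):
    """More foreground in the last frame -> lower weight on the sparse term,
    so a less sparse foreground is tolerated in the current frame."""
    sparsity = prev_fg_mask.mean() if prev_fg_mask is not None else 0.0
    return lam_base / (1.0 + 10.0 * sparsity)  # illustrative mapping

def process_video(frames, orpca_step):
    """orpca_step(x, lam) -> (background, foreground) is a hypothetical
    online robust PCA update over the low-dimensional observation x."""
    prev_mask = None
    for frame in frames:
        x = block_median(frame)
        lam = adaptive_lambda(prev_mask)
        background, foreground = orpca_step(x.ravel(), lam)
        prev_mask = np.abs(foreground).reshape(x.shape) > 1e-3
        yield prev_mask
```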


IEEE Transactions on Biomedical Engineering | 2017

Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring

Xi Cai; Guang Han; Xin Song; Jinkuan Wang

Objective: Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicular to the camera's optical axis or along specified routes, which limits applicability in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter that represents gait symmetry and has no significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. Methods: Based on projective geometry, we first derive a formula for the step length ratio in the case of unconstrained straight-line walking. Then, to handle general cases, we propose a correction for non-collinear footprints and accordingly provide a general procedure for extracting the step length ratio from unconstrained straight walking. Results: Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects’ normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). This performance surpasses that of a well-established monocular gait measurement system, which is suitable only for side-view gaits and yields a MAPE of 3.5538%. Conclusion: Independent of walking direction, our method can accurately estimate step length ratios from unconstrained straight walking. Significance: This demonstrates that our method is applicable to elders’ daily gait monitoring and can provide valuable information for elderly health care, such as abnormal gait recognition and fall risk assessment.
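
The projective-geometry core can be sketched with the cross ratio: for three consecutive collinear footprints and the vanishing point of the walking line, the world step-length ratio is recoverable directly from image coordinates. Footprint detection, the non-collinear correction, and vanishing-point estimation are assumed to be done elsewhere.

```python
import numpy as np

def step_length_ratio(a, b, c, v):
    """a, b, c: image points (x, y) of three consecutive footprints on one
    straight walking line; v: vanishing point of that line.
    Returns the world step-length ratio |AB| / |BC|.
    The cross ratio (A, C; B, V) is invariant under projection, and in the
    world V is the point at infinity, so the cross ratio reduces to AB/CB."""
    a, b, c, v = (np.asarray(p, dtype=float) for p in (a, b, c, v))
    d = lambda p, q: np.linalg.norm(p - q)  # distances along the image line
    return (d(a, b) * d(c, v)) / (d(b, c) * d(a, v))

# Example: a side view, where the walking line is nearly parallel to the
# image plane, the vanishing point is far away, and the ratio is ~1 for
# symmetric steps.
a, b, c = (100.0, 300.0), (160.0, 300.0), (220.0, 300.0)
v = (1e6, 300.0)
print(step_length_ratio(a, b, c, v))  # ~1.0
```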


IEEE International Conference on Progress in Informatics and Computing | 2015

Video-based noncontact heart rate measurement using ear features

Xi Cai; Guang Han; Jinkuan Wang

Heart rate measurement, especially outside clinical environments, has recently become a hot topic in the field of health monitoring. Video-based heart rate measurement is contactless and unobtrusive; however, most video-based methods are sensitive to illumination changes. Inspired by a previous study that measures heart rate from ballistocardiographic motion sensed by an ear-worn tri-axial accelerometer, we propose to detect the ballistocardiographic motion of the ear by tracking ear features in a video, and on this basis provide a video-based noncontact heart rate monitoring method. Scale-invariant features, robust to illumination changes and partial occlusion, are employed to describe the structural characteristics of the ear and to capture its movements. From the motion signals of the ear features, the underlying ballistocardiographic signal is separated and spectrally analyzed to estimate the heart rate. Experimental results demonstrate that our method can obtain accurate heart rates from video, close to the measurements obtained by a contact sensor.
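
A minimal sketch of the final estimation stage, assuming the ear features have already been tracked into a one-dimensional vertical-displacement trace sampled at the video frame rate; the cardiac band (0.75 to 3 Hz, i.e. 45 to 180 bpm) and the filter order are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(displacement, fps):
    """displacement: 1-D mean vertical position of the tracked ear features,
    one value per frame. Returns the estimated heart rate in bpm."""
    x = displacement - displacement.mean()
    # Keep only plausible cardiac frequencies (45-180 bpm).
    b, a = butter(4, [0.75, 3.0], btype='bandpass', fs=fps)
    x = filtfilt(b, a, x)
    # The dominant spectral peak gives the heart rate.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]

# Example: a synthetic 1.2 Hz (72 bpm) ballistocardiographic oscillation.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
trace = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(round(heart_rate_bpm(trace, fps), 1))  # ~72.0
```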


Applied Mechanics and Materials | 2014

Background Subtraction Based on Pulse Coupled Neural Network

Xi Cai; Guang Han; Jin Kuan Wang

Dynamic environments often pose great challenges to moving object detection, and solving this problem would broaden its range of applications. Unlike background subtraction methods that use local feature-based background models, and inspired by the holistic nature of human visual perception, we present a background subtraction method for moving object detection in dynamic environments that builds its background models on global features extracted by a pulse coupled neural network. We employ the pulse coupled neural network to take advantage of its global coupling characteristics and thereby imitate biological visual activity. After sensing images via the pulse coupled neural network, we extract global information about the scene and then build background models, robust to background disturbances, based on these global features. Experimental results show that our method performs well in both visual and quantitative evaluations in dynamic environments.
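
One way to sketch the idea: describe each frame by a global PCNN firing map (here, cumulative firing counts from a simplified unit-linking PCNN) and keep a running statistical model of that map as the background. All constants are illustrative, and the paper's exact feature and model are richer than this.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def pcnn_firing_counts(S, n_iter=10, beta=0.2, decay=0.7):
    """S: frame normalized to [0, 1]. Returns per-pixel firing counts; the
    coupling term spreads firing among similar neighbours, so the counts
    reflect global structure rather than isolated pixel values."""
    theta = np.ones_like(S)
    fired_prev = np.zeros(S.shape, dtype=bool)
    counts = np.zeros_like(S)
    for _ in range(n_iter):
        L = maximum_filter(fired_prev.astype(float), size=3)  # unit linking
        fired = S * (1.0 + beta * L) > theta
        counts += fired
        theta = np.where(fired, theta + 2.0, theta * decay)
        fired_prev = fired
    return counts

def detect(frames, alpha=0.05, k=2.0):
    """Running mean/variance of the firing-count feature as the background
    model; pixels whose firing behaviour departs from it are foreground."""
    mean = var = None
    for frame in frames:
        f = pcnn_firing_counts(np.asarray(frame, dtype=float) / 255.0)
        if mean is None:
            mean, var = f.copy(), np.ones_like(f)
        diff = f - mean
        mask = np.abs(diff) > k * np.sqrt(var)
        mean += alpha * diff            # slow background model update
        var += alpha * (diff ** 2 - var)
        yield mask
```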


Advanced Materials Research | 2014

Multiwavelet-Based Image Fusion Method Using Unit-Linking Pulse Coupled Neural Networks

Xi Cai; Han Guang; Jin Kuan Wang

To simulate the response of the human visual system to detail and to make full use of the global features of source images, we propose a multiwavelet-based image fusion method using a unit-linking pulse coupled neural network (ULPCNN) model. Once stimulated by external inputs from the images, the ULPCNNs produce a series of binary pulses that carry rich global information. We then employ the first firing time of each neuron as the salience measure. Experimental results demonstrate that, for multifocus images, remote sensing images, and infrared and visible images, our method consistently generates satisfactory fusion results.


Applied Mechanics and Materials | 2013

Image Fusion for Video Surveillance in Curvelet Domain

Xi Cai; Guang Han; Jin Kuan Wang

To simulate the biological activity of the human visual system, we propose a curvelet-based image fusion method using a unit-linking pulse coupled neural network (ULPCNN) model. The contrasts of the detail coefficients are input to the ULPCNNs to imitate the sensitivity of the human visual system (HVS) to detailed information, and these contrasts also serve as the linking strengths of the corresponding neurons. Once stimulated by the external inputs from the images, the ULPCNNs produce a series of binary pulses that carry rich global-feature information. We then use the average firing activity of the output pulses in a neighborhood as the salience measure to determine our fusion rules. Experimental results demonstrate that our method achieves satisfactory fusion results in terms of both visual effects and objective evaluations.
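
A minimal sketch of the stated fusion rule, assuming the binary pulse stacks have already been produced by the ULPCNNs; the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_avg_firing(coef_a, coef_b, pulses_a, pulses_b, window=3):
    """pulses_*: (n_iter, H, W) binary output pulses of the ULPCNNs for the
    two sources. Salience = average firing activity over a local window;
    each fused coefficient is taken from the more salient source."""
    salience_a = uniform_filter(pulses_a.sum(axis=0).astype(float), size=window)
    salience_b = uniform_filter(pulses_b.sum(axis=0).astype(float), size=window)
    return np.where(salience_a >= salience_b, coef_a, coef_b)
```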


Advanced Materials Research | 2013

Image Fusion Method Based on Universal Hidden Markov Tree Model in the Contourlet Domain

Xi Cai; Guang Han; Jin Kuan Wang

Considering the statistical characteristics of the contourlet coefficients of images, we propose an image fusion method based on the universal contourlet hidden Markov tree (uHMT) model. A salience measure and a match measure are defined according to the probability that a contourlet coefficient belongs to the high state of the uHMT model, which requires no training. Experimental results demonstrate the effectiveness of our method in terms of both visual quality and objective evaluations.
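
A minimal sketch of the decision logic, assuming the probability that each coefficient is in the uHMT high (edge-bearing) state has already been evaluated from the model; the particular match measure and its cut-off are common fusion conventions used here as assumptions, not the paper's exact definitions.

```python
import numpy as np

def fuse_uhmt(coef_a, coef_b, p_high_a, p_high_b, match_cutoff=0.5):
    """p_high_*: probability that each contourlet coefficient of the two
    sources is in the uHMT 'high' (edge-bearing) state, used as salience."""
    # Match measure: similarity of the two saliences (1 = identical).
    match = 2.0 * np.minimum(p_high_a, p_high_b) / (p_high_a + p_high_b + 1e-12)
    # Where the sources disagree, select the more salient coefficient; where
    # they agree, blend them with salience-proportional weights.
    selected = np.where(p_high_a >= p_high_b, coef_a, coef_b)
    w = p_high_a / (p_high_a + p_high_b + 1e-12)
    blended = w * coef_a + (1.0 - w) * coef_b
    return np.where(match > match_cutoff, blended, selected)
```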

Collaboration


Dive into Xi Cai's collaboration.

Top Co-Authors

Jinkuan Wang (Northeastern University)
Guang Han (Northeastern University)
Xin Song (Northeastern University)