Peizhong Liu
Huaqiao University
Publication
Featured research published by Peizhong Liu.
Multimedia Tools and Applications | 2018
Yanmin Luo; Liang Zhao; Peizhong Liu; Detian Huang
Recognizing smoke in visual scenes is challenging due to large variations in color, texture, shape, and other features. Current detection algorithms are mainly based on a single feature or a fusion of multiple static features of smoke, which leads to low detection accuracy. To solve this problem, this paper proposes a smoke detection algorithm based on the motion characteristics of smoke and convolutional neural networks (CNNs). Firstly, a moving object detection algorithm based on dynamic background updating and the dark channel prior is proposed to detect suspected smoke regions. Then, the features of each suspected region are extracted automatically by a CNN, on which smoke identification is performed. Compared to previous work, our algorithm improves detection accuracy, reaching 99% on the test sets. To handle the relatively small smoke regions in the early stage of smoke generation, a strategy of implicitly enlarging the suspected regions is proposed, which improves the timeliness of smoke detection. In addition, a fine-tuning method is proposed to address the scarcity of training data. Tests on various video scenes further confirm the algorithm's good smoke detection performance.
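The dark channel prior mentioned in the first step can be sketched as follows. This is a generic implementation of the standard dark channel computation, not the paper's code; the patch size and the idea of thresholding the map to flag smoke candidates are illustrative assumptions.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over the RGB channels, followed by a local
    minimum filter over a square patch.

    image: H x W x 3 float array in [0, 1].
    Smoke-covered pixels tend to have high dark-channel values, so
    thresholding this map is one way to flag suspected regions.
    """
    mins = image.min(axis=2)                     # min over color channels
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):                       # local minimum filter
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A fast implementation would replace the inner loops with a morphological erosion, but the brute-force version makes the definition explicit.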
Multimedia Tools and Applications | 2018
Hongxiang Wang; Peizhong Liu; Yongzhao Du; Xiaofang Liu
To address the lack of spatio-temporal information in convolutional neural network abstractions, an online visual tracking algorithm based on a convolutional neural network is proposed, which incorporates a spatio-temporal context model into the ordered filters of the network. Firstly, the initial target is preprocessed and the target spatial model is extracted, and the spatio-temporal context model is obtained from the spatio-temporal information. The first layer convolves the input with the spatio-temporal context model to obtain simple-layer features. The second layer, which skips the spatio-temporal context model, learns a set of convolution filters that are convolved with the simple-layer features to extract abstract target features; a deep representation of the target is then obtained by superimposing the convolution results of the simple layer. Finally, target tracking is realized by a sparse updating method combined with a particle filter tracking framework. Experiments show that the deep abstract features extracted by the online convolutional network structure combined with the spatio-temporal context model preserve spatio-temporal information and improve robustness to background clutter, illumination variation, low resolution, occlusion, and scale variation, as well as tracking efficiency under complex backgrounds.
Multimedia Tools and Applications | 2018
Yanmin Luo; Zhitong Xu; Peizhong Liu; Yongzhao Du; Jing-Ming Guo
Human pose estimation, especially multi-person pose estimation, is vital for understanding abnormal human behavior. In this paper, we develop a fractal hourglass model to automatically regress human body joints, and propose a layered double-way inference algorithm to calculate the affinity between neighboring skeleton joints. Firstly, the original hourglass residual unit is replaced and the heatmap regression process for candidate skeleton joint locations is described. We then determine the specific body joint locations and optimize the regression results. Next, the double-way conditional probabilities between adjacent joints are defined as the pairwise joint affinity and applied to match adjacent human body parts. Furthermore, we adopt a spatial distance constraint to refine the joint matching results. Finally, we connect the best-matching joint pairs and iterate the process until all candidate joints are assigned to individuals. Extensive experiments on the MPII multi-person subset and the COCO 2016 keypoints challenge show the effectiveness of our method, which outperforms the second-best method (Associative Embedding) by 0.45% and 1.20%, respectively.
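The final assembly step — repeatedly connecting the best-matching joint pair until every candidate is assigned — amounts to a greedy matching over affinity scores. The sketch below is a simplified stand-in for the paper's layered double-way inference; the pair/score representation is an assumption for illustration.

```python
def greedy_match(affinity):
    """Greedily connect joint pairs in order of descending affinity,
    using each candidate joint at most once.

    affinity: dict mapping a candidate pair (i, j) -> affinity score,
              where i indexes candidates of one joint type and j of
              the adjacent type.
    Returns the list of accepted (i, j, score) connections.
    """
    used_i, used_j, pairs = set(), set(), []
    for (i, j), s in sorted(affinity.items(), key=lambda kv: -kv[1]):
        if i not in used_i and j not in used_j:   # best pair still free
            pairs.append((i, j, s))
            used_i.add(i)
            used_j.add(j)
    return pairs
```

In practice the affinity would be the double-way conditional probability filtered by the spatial distance constraint before matching.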
Information-an International Interdisciplinary Journal | 2018
Zhi Chen; Peizhong Liu; Yongzhao Du; Yanmin Luo; Wancheng Zhang
Correlation filter (CF) based tracking algorithms have shown excellent performance in comparison to most state-of-the-art algorithms on the object tracking benchmark (OTB). Nonetheless, most CF based tracking algorithms consider only a limited single-channel feature, and the tracking model is always updated frame by frame. This generates erroneous information when the target objects undergo complicated scene changes such as background clutter, occlusion, and out-of-view motion. Long-term accumulation of erroneous model updates causes tracking drift. To address these problems, in this paper we propose a robust multi-scale correlation filter tracking algorithm via self-adaptive fusion of multiple features. First, we fuse powerful multiple features, including the histogram of oriented gradients (HOG), color names (CN), and the histogram of local intensities (HI), in the response layer. The weights are assigned according to the proportion of response scores generated by each feature, achieving self-adaptive fusion of multiple features for a preferable feature representation. Meanwhile, an efficient model update strategy is proposed, which uses a pre-defined response threshold as the discriminative condition for updating the tracking model. In addition, we introduce an accurate multi-scale estimation method integrated with the model update strategy, which further improves adaptability to scale variation. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed tracker performs superiorly against state-of-the-art CF based methods.
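The response-layer fusion described above — weighting each feature's response map by its share of the response scores — can be sketched as follows. Using the peak of each map as its score, and the feature names, are assumptions for illustration; the paper's exact scoring may differ.

```python
import numpy as np

def fuse_responses(responses):
    """Fuse per-feature correlation filter response maps with weights
    proportional to each map's peak response score.

    responses: dict mapping feature name (e.g. "hog", "cn", "hi")
               to a 2D response map of identical shape.
    Returns the fused response map and the weights used.
    """
    scores = {name: r.max() for name, r in responses.items()}
    total = sum(scores.values())
    weights = {name: s / total for name, s in scores.items()}
    fused = sum(weights[name] * r for name, r in responses.items())
    return fused, weights
```

A threshold-gated model update then reduces drift: the tracking model is re-trained only when `fused.max()` exceeds a pre-defined response threshold, so low-confidence frames (occlusion, clutter) do not corrupt it.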
Australasian Physical & Engineering Sciences in Medicine | 2018
Yuling Fan; Peizhong Liu; Jianeng Tang; Yanmin Luo; Yongzhao Du
For the diagnosis and treatment of breast tumors, automatic detection of the glands is a crucial step, and accurate segmentation of the gland is directly related to the effectiveness of the patient's treatment. It is therefore necessary to propose an automatic segmentation algorithm based on mammary gland features. This paper proposes a mammary gland segmentation method based on differential evolution (DE) and fuzzy entropy. First, an evaluation function for image segmentation is constructed from the image fuzzy entropy. Then DE is adopted: the fuzzy entropy parameters are regarded as the individuals of the initial population, and after the three evolutionary processes of mutation, crossover, and selection search for the parameters that maximize the fuzzy entropy, the optimal threshold for gland segmentation is obtained. Finally, the mammary gland is segmented by thresholding at the maximum fuzzy entropy. Eight breast images covering four tissue types are each tested 100 times, with accuracy (Acc), sensitivity (Sen), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), and mean structural similarity (Mssim) used to measure the segmentation results. The Acc values of the proposed algorithm are 98.46 ± 8.02E−03%, 95.93 ± 2.38E−02%, 93.88 ± 6.59E−02%, 94.73 ± 1.82E−01%, 96.19 ± 1.15E−02%, 97.51 ± 1.36E−02%, 96.64 ± 6.35E−02%, and 94.76 ± 6.21E−02%, respectively. The mean Mssim values over the 100 tests are 0.985, 0.933, 0.924, 0.907, 0.984, 0.928, 0.938, and 0.941, respectively. The proposed algorithm is more effective and robust than other fuzzy entropy methods based on swarm intelligence optimization algorithms. The experimental results show that the proposed algorithm achieves higher accuracy in the segmentation of mammary glands and may serve as a gold standard in the analysis and treatment of breast tumors.
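The mutation/crossover/selection loop described above is the standard DE/rand/1/bin scheme, sketched generically below. The population size, F, CR, and the toy fitness in the usage note are assumptions, not the paper's settings; in the paper the fitness would be the image's fuzzy entropy and the parameter vector the membership-function thresholds.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop=20, gens=50,
                           F=0.5, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin loop maximizing `fitness`.

    fitness: function mapping a parameter vector to a scalar score.
    bounds:  list of (low, high) limits, one per parameter.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)       # initial population
    f = np.array([fitness(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            v = np.clip(a + F * (b - c), lo, hi)      # mutation
            mask = rng.random(dim) < CR               # binomial crossover
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, X[i])
            fu = fitness(u)
            if fu > f[i]:                             # greedy selection
                X[i], f[i] = u, fu
    best = int(np.argmax(f))
    return X[best], f[best]
```

For example, maximizing the toy fitness `lambda v: -(v[0] - 3.0) ** 2` over `[(0, 10)]` converges to a threshold near 3; swapping in a fuzzy entropy objective yields the segmentation threshold.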
Multimedia Tools and Applications | 2017
Peizhong Liu; Ming Hong; Minghang Wang; Peiting Gu; Detian Huang
Since a human face can be represented by a few landmarks with little redundant information and computed as a linear combination of a small number of prototypical faces, we propose a two-step 3D face reconstruction approach comprising landmark depth estimation and shape deformation. The proposed approach allows us to reconstruct a realistic 3D face from a 2D frontal face image. First, we apply a coupled dictionary learning method based on sparse representation to explore the underlying mappings between pairs of 2D and 3D training landmarks. In this method, a weighted l1-norm sparsity function is introduced to better approximate the l0-norm sparsity, and the depth of the landmarks can then be estimated. Second, we propose a novel shape deformation method that reconstructs the 3D face by combining a small number of the most relevant deformed faces, which are obtained from the estimated landmarks. Sparsity regularization is also introduced to find the relevant faces in this second step. The proposed approach explores the distributions of 2D and 3D faces and the underlying mappings between them well, because human faces are represented by low-dimensional landmarks whose distributions are described by sparse representations. Moreover, it is flexible, since either step can be modified independently. Extensive experiments are conducted on the BJUT_3D database, and the results validate the effectiveness of the proposed approach.
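The coupled-dictionary idea in the first step can be sketched as follows: code the 2D landmarks sparsely over the 2D dictionary, then reuse the same code with the coupled 3D dictionary to estimate depth. This sketch uses plain ISTA with a uniform l1 penalty in place of the paper's weighted l1 formulation; function names and the regularization weight are illustrative assumptions.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, iters=200):
    """Plain ISTA for min_x 0.5 * ||y - D x||^2 + lam * ||x||_1.
    (The paper uses a weighted l1 penalty; uniform weights here.)"""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - D.T @ (D @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold
    return x

def estimate_depth(D2, D3, landmarks_2d, lam=0.1):
    """Coupled-dictionary mapping: the sparse code of the 2D landmarks
    over D2 is applied to the coupled 3D dictionary D3."""
    x = ista_sparse_code(D2, landmarks_2d, lam)
    return D3 @ x
```

The key assumption, shared with the paper, is that paired 2D/3D training landmarks admit the same sparse code over their respective dictionaries.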
Multimedia Tools and Applications | 2016
Yanmin Luo; Detian Huang; Peizhong Liu; Hsuan-Ming Feng
IEEE Transactions on Image Processing | 2019
Yanmin Luo; Zhitong Xu; Peizhong Liu; Yongzhao Du; Jing-Ming Guo
Journal of Computers | 2018
Xiaofang Liu; Peizhong Liu; Yan-Ming Luo; Jia-Neng Tang; Detian Huang; Yongzhao Du
Journal of Optical Technology | 2018
Yongzhao Du; Hongxiang Wang; Peizhong Liu; Yuqing Fu