Sanping Zhou
Xi'an Jiaotong University
Publication
Featured research published by Sanping Zhou.
Computer Vision and Pattern Recognition | 2016
De Cheng; Yihong Gong; Sanping Zhou; Jinjun Wang; Nanning Zheng
Person re-identification across cameras remains a very challenging problem, especially when there are no overlapping fields of view between cameras. In this paper, we present a novel multi-channel parts-based convolutional neural network (CNN) model under the triplet framework for person re-identification. Specifically, the proposed CNN model consists of multiple channels to jointly learn both the global full-body and local body-parts features of the input persons. The CNN model is trained by an improved triplet loss function that serves to pull the instances of the same person closer, and at the same time push the instances belonging to different persons farther from each other in the learned feature space. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets.
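The improved triplet objective described above (pull same-person instances together, push different persons apart) can be sketched roughly as follows. The margin values, the intra-class pull term, and the weight `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def improved_triplet_loss(anchor, positive, negative,
                          margin=1.0, intra_margin=0.1, beta=0.002):
    """Sketch of a triplet loss with an added intra-class pull term.

    The relative term ranks the negative farther from the anchor than
    the positive by `margin`; the intra-class term additionally pulls
    same-person pairs inside `intra_margin`.  All constants here are
    illustrative, not the paper's settings.
    """
    d_ap = np.sum((anchor - positive) ** 2)    # squared anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2)    # squared anchor-negative distance
    relative = max(d_ap - d_an + margin, 0.0)  # rank positives above negatives
    intra = max(d_ap - intra_margin, 0.0)      # keep positives tightly clustered
    return relative + beta * intra

# Toy 4-D embeddings: positive close to the anchor, negative far away.
a = np.array([0.0, 0.0, 0.0, 0.0])
p = np.array([0.1, 0.0, 0.0, 0.0])
n = np.array([2.0, 0.0, 0.0, 0.0])
print(improved_triplet_loss(a, p, n))  # well-separated triplet, loss is 0.0
```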
Neurocomputing | 2016
Sanping Zhou; Jinjun Wang; Shun Zhang; Yudong Liang; Yihong Gong
This paper proposes a novel region-based active contour model in the level set formulation for medical image segmentation. We define a unified fitting energy framework based on Gaussian probability distributions to obtain the maximum a posteriori probability (MAP) estimation. The energy consists of a global term that characterizes the fit of a global Gaussian distribution to the intensities inside and outside the evolving curve, and a local term that characterizes the fit of a local Gaussian distribution to the local intensity information. In the resulting contour evolution that minimizes the associated energy, the global energy term accelerates the evolution of the curve when it is far from the objects, while the local energy term guides the curve to stop on object boundaries when it is near them. In addition, a weighting function between the local and global energy terms is constructed from local and global variance information, which enables the model to select the weights adaptively when segmenting images with intensity inhomogeneity. Extensive experiments on both synthetic and real medical images show significant improvements in both efficiency and accuracy compared with popular methods.
Highlights: (1) Both global and local intensity information are incorporated into our method to segment images with intensity inhomogeneity. (2) Size information of the local neighborhood partition is used to build the a priori probability model. (3) A weighting function between the local and global energy terms is proposed.
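The adaptive weighting between global and local energy terms can be illustrated with a small sketch. The variance-ratio formula below is one plausible instantiation of the idea, not the paper's exact weighting function: in homogeneous neighborhoods (low local variance) the global term dominates, while inhomogeneous neighborhoods shift weight toward the local term.

```python
import numpy as np

def adaptive_energy_weight(image, y, x, radius=3):
    """Hypothetical variance-based weight on the global energy term at
    pixel (y, x).  A low local variance (homogeneous neighborhood)
    yields a weight near 1, favoring the global fitting term; a high
    local variance favors the local term.  The combined energy would
    then be  w * E_global + (1 - w) * E_local."""
    patch = image[max(y - radius, 0):y + radius + 1,
                  max(x - radius, 0):x + radius + 1]
    local_var = patch.var()    # variance inside the neighborhood
    global_var = image.var()   # variance over the whole image
    return global_var / (local_var + global_var + 1e-12)
```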
Neurocomputing | 2016
Yudong Liang; Jinjun Wang; Sanping Zhou; Yihong Gong; Nanning Zheng
Deep convolutional neural networks have been applied to the single image super-resolution problem and have demonstrated state-of-the-art quality. This paper presents several types of prior information that can be utilized during the training of a deep convolutional neural network. The first type of prior focuses on edge and texture restoration in the output, and the second type utilizes multiple upscaling factors to exploit structure recurrence across different scales. As demonstrated by our experimental results, the proposed framework accelerates training by more than ten times and at the same time leads to better image quality. The generated super-resolution images achieve visually sharper and more pleasant restoration as well as superior objective evaluation results compared to state-of-the-art methods.
Computer Vision and Pattern Recognition | 2017
Sanping Zhou; Jinjun Wang; Jiayun Wang; Yihong Gong; Nanning Zheng
Person re-identification (Re-ID) remains a challenging problem due to significant appearance changes caused by variations in view angle, background clutter, illumination conditions and mutual occlusion. To address these issues, conventional methods usually focus on proposing robust feature representations or learning metric transformations based on pairwise similarity, using a Fisher-type criterion. Recent deep learning based approaches address the two processes jointly and have achieved promising progress. One key issue for deep learning based person Re-ID is the selection of a proper similarity comparison criterion, and the performance of features learned with existing pairwise-similarity criteria is still limited, because mostly only point-to-point (P2P) distances are considered. In this paper, we present a novel person Re-ID method based on point-to-set (P2S) similarity comparison. The P2S metric can jointly minimize the intra-class distance and maximize the inter-class distance, while back-propagating the gradient to optimize the parameters of the deep model. By utilizing the proposed P2S metric, the learned deep model can effectively distinguish different persons by learning discriminative and stable feature representations. Comprehensive experimental evaluations on the 3DPeS, CUHK01, PRID2011 and Market1501 datasets demonstrate the advantages of our method over state-of-the-art approaches.
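The point-to-set comparison described above can be sketched as follows. Using the mean distance from a probe to each set, and a hinge margin between the positive-set and negative-set distances, is an illustrative choice; the paper's exact P2S formulation may differ.

```python
import numpy as np

def p2s_margin_loss(probe, pos_set, neg_set, margin=1.0):
    """Sketch of a point-to-set (P2S) loss: the probe embedding should
    be closer to the set of same-identity features (pos_set) than to
    the set of other-identity features (neg_set) by `margin`.  Sets are
    given as (n_samples, dim) arrays; the mean squared distance is an
    illustrative aggregation, not necessarily the paper's."""
    d_pos = np.mean(np.sum((pos_set - probe) ** 2, axis=1))  # probe -> positive set
    d_neg = np.mean(np.sum((neg_set - probe) ** 2, axis=1))  # probe -> negative set
    return max(d_pos - d_neg + margin, 0.0)
```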
Neurocomputing | 2017
Sanping Zhou; Jinjun Wang; Mengmeng Zhang; Qing Cai; Yihong Gong
This paper presents a novel correntropy-based level set method (CLSM) for medical image segmentation and bias field correction. Firstly, we build a local bias-field-corrected fitting image (LBFI) model in the level set formulation by simultaneously using the bias field information and the local intensity information. Then, a local bias-field-corrected image fitting (LBIF) energy is introduced by minimizing the difference between the LBFI and the input image in a neighborhood, which makes it effective in segmenting images with intensity inhomogeneity. Finally, by incorporating the correntropy criterion into the LBIF energy, the proposed CLSM decreases the weights of samples that lie far from the intensity means, making it more robust to the effects of noise. The CLSM is then integrated with respect to the neighborhood center to give a global characterization of image segmentation and bias field correction. Extensive experiments on both synthetic images and real medical images are provided to evaluate our method, showing significant improvements in both efficiency and accuracy compared with state-of-the-art methods.
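The robustness property attributed to the correntropy criterion above comes from the shape of the correntropy-induced error: it grows like the squared error for small residuals but saturates for large ones, so outliers far from the intensity mean contribute little. A minimal sketch, with an illustrative kernel width:

```python
import numpy as np

def correntropy_fit_error(residual, sigma=1.0):
    """Correntropy-induced (Welsch-type) error with a Gaussian kernel.

    Near zero it behaves like residual**2 / (2 * sigma**2); for large
    residuals it saturates at 1, effectively down-weighting outliers.
    `sigma` is an illustrative kernel width, not the paper's setting.
    """
    return 1.0 - np.exp(-residual ** 2 / (2.0 * sigma ** 2))
```

The implied per-sample weight relative to the squared error is exp(-residual**2 / (2 * sigma**2)), which decays toward zero as the residual grows; that is the mechanism by which noisy samples are suppressed.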
Pattern Recognition | 2018
Sanping Zhou; Jinjun Wang; Deyu Meng; Xiaomeng Xin; Yubing Li; Yihong Gong; Nanning Zheng
Person re-identification (Re-ID) usually suffers from noisy samples with background clutter and mutual occlusion, which makes it extremely difficult to distinguish different individuals across disjoint camera views. In this paper, we propose a novel deep self-paced learning (DSPL) algorithm to alleviate this problem, in which we apply a self-paced constraint and a symmetric regularization to help the relative distance metric train the deep neural network, so as to learn stable and discriminative features for person Re-ID. Firstly, we propose a soft polynomial regularizer term which derives adaptive weights for samples based on both the training loss and the model age. As a result, high-confidence fidelity samples are emphasized and low-confidence noisy samples are suppressed at the early stage of training. Such a learning regime is naturally implemented under a self-paced learning (SPL) framework, in which sample weights are adaptively updated based on both model age and sample loss using an alternating optimization method. Secondly, we introduce a symmetric regularizer term to revise the asymmetric gradient back-propagation derived from the relative distance metric, so as to simultaneously minimize the intra-class distance and maximize the inter-class distance in each triplet unit. Finally, we build a part-based deep neural network, in which the features of different body parts are first discriminatively learned in the lower convolutional layers and then fused in the higher fully connected layers. Experiments on several benchmark datasets demonstrate the superior performance of our method compared with state-of-the-art approaches.
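The self-paced weighting idea (suppress high-loss samples early, admit them as the model ages) can be sketched with a simple soft weight. The linear form below stands in for the paper's soft polynomial regularizer, whose exact shape is not given in the abstract:

```python
import numpy as np

def self_paced_weight(loss, age):
    """Illustrative soft self-paced weight in [0, 1].

    Samples whose loss is small relative to the model "age" parameter
    get weight near 1; noisy high-loss samples are suppressed.  As age
    grows during training, harder samples are gradually admitted.
    """
    return float(np.clip(1.0 - loss / age, 0.0, 1.0))
```

In an SPL-style alternating optimization, these weights would be recomputed from the current per-sample losses after each model update, then used to scale each sample's contribution to the next update.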
Pacific Rim Conference on Multimedia | 2016
Sanping Zhou; Jinjun Wang; Qiqi Hou; Yihong Gong
This paper presents a deep ranking model with feature learning and fusion supervised by a novel contrastive loss function for person re-identification. Given the probe image set, we organize the training images into a batch of pairwise samples, each pairing a probe image with a matched or a mismatched reference from the gallery image set. Treating these pairwise samples as inputs, we build a part-based deep convolutional neural network (CNN) to generate layered feature representations supervised by the proposed contrastive loss function, in which intra-class distances are minimized and inter-class distances are maximized. In the deep model, the features of different body parts are first discriminatively learned in the convolutional layers and then fused in the fully connected layers, which enables the network to extract discriminative features for different individuals. Extensive experiments on public benchmark datasets show significant improvements in accuracy compared with state-of-the-art approaches.
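The pairwise supervision described above follows the contrastive pattern: matched pairs are pulled together, mismatched pairs are pushed beyond a margin. The vanilla contrastive loss is sketched below; the paper proposes a variant of this objective, and the margin value is an illustrative assumption.

```python
import numpy as np

def contrastive_loss(feat_a, feat_b, same_person, margin=1.0):
    """Standard contrastive loss over one probe/reference pair.

    Matched pairs (same_person=True) are penalized by their squared
    distance; mismatched pairs are penalized only while they remain
    inside `margin`.  Shown as the vanilla form, not the paper's exact
    variant.
    """
    d = np.sqrt(np.sum((feat_a - feat_b) ** 2))  # Euclidean distance
    if same_person:
        return d ** 2                    # minimize intra-class distance
    return max(margin - d, 0.0) ** 2     # push inter-class beyond margin
```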
Pattern Recognition | 2018
Qing Cai; Huiying Liu; Sanping Zhou; Jingfeng Sun; Jing Li
The active contour model is a widely used method for image segmentation, but most existing active contour models yield poor performance when applied to images with severe intensity inhomogeneity. To address this issue, we propose an adaptive-scale active contour model (ASACM) based on image entropy and a semi-naive Bayesian classifier, which achieves simultaneous segmentation and bias field estimation for images with severe intensity inhomogeneity. Firstly, an adaptive scale operator is constructed to adjust the scale of the ASACM according to the degree of intensity inhomogeneity. Secondly, we define an improved bias field estimation term by assigning a dependent membership function to each pixel to estimate the bias field in severely inhomogeneous images. Thirdly, a new penalty term based on a piecewise polynomial is proposed, which avoids the time-consuming re-initialization process and the instability of the conventional penalty term. Experimental results demonstrate that the proposed ASACM consistently outperforms many state-of-the-art models in segmentation accuracy, efficiency and robustness with respect to initialization and noise.
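The adaptive scale operator above is tied to image entropy. The sketch below only shows how a local entropy could be measured from an intensity histogram; how the ASACM maps entropy to a contour scale is not specified in the abstract and may differ substantially.

```python
import numpy as np

def local_entropy(image, y, x, radius=5, bins=16):
    """Shannon entropy of the intensity histogram around pixel (y, x).

    Homogeneous neighborhoods concentrate in few bins (low entropy);
    inhomogeneous ones spread across bins (high entropy).  Intensities
    are assumed normalized to [0, 1]; bin count is illustrative.
    """
    patch = image[max(y - radius, 0):y + radius + 1,
                  max(x - radius, 0):x + radius + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```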
Pattern Recognition | 2018
Jiayun Wang; Sanping Zhou; Jinjun Wang; Qiqi Hou
IEEE Transactions on Multimedia | 2018
Sanping Zhou; Jinjun Wang; Rui Shi; Qiqi Hou; Yihong Gong; Nanning Zheng