Kun Zhan
Lanzhou University
Publication
Featured research published by Kun Zhan.
IEEE Transactions on Neural Networks | 2009
Kun Zhan; Hongjuan Zhang; Yide Ma
Based on studies of existing locally connected neural network models, in this brief we present a new spiking cortical neural network model and find that the time matrix of the model can be interpreted as the human subjective sense of stimulus intensity. The series of output pulse images of the proposed model represents the segment, edge, and texture features of the original image; several efficient measures computed on the pulse images form a sequence that serves as the feature of the original image. We characterize texture images by this sequence for invariant texture retrieval. The experimental results show that the retrieval scheme is effective in extracting rotation- and scale-invariant features. The new model also obtains good results in other image processing applications.
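As an illustration of how a series of pulse images can yield a feature sequence, the sketch below runs a simplified spiking cortical model and records the entropy of each binary pulse image. The update equations, parameter names, and values are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def scm_feature_sequence(img, iters=20, f=0.8, g=0.7, h=20.0, beta=0.3):
    """Run a simplified spiking cortical model and return the entropy of
    each binary pulse image as a feature sequence.
    NOTE: equations and parameter values are illustrative assumptions."""
    S = img.astype(float) / (img.max() + 1e-12)   # normalized stimulus
    U = np.zeros_like(S)                          # internal activity
    E = np.ones_like(S)                           # dynamic threshold
    Y = np.zeros_like(S)                          # binary pulse output
    kernel = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])
    feats = []
    for _ in range(iters):
        # linking input: weighted sum of neighbouring pulses (zero-padded)
        L = np.zeros_like(S)
        P = np.pad(Y, 1)
        for i in range(3):
            for j in range(3):
                L += kernel[i, j] * P[i:i + S.shape[0], j:j + S.shape[1]]
        U = f * U + S * (1.0 + beta * L)          # leaky integration with linking
        Y = (U > E).astype(float)                 # fire where activity exceeds threshold
        E = g * E + h * Y                         # raise threshold after firing
        p = Y.mean()                              # fraction of firing neurons
        ent = 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
        feats.append(ent)                         # entropy of this pulse image
    return np.array(feats)
```

Each entry of the returned sequence summarizes one pulse image, giving a compact signature of the input.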
Image and Vision Computing | 2010
Yide Ma; Li Liu; Kun Zhan; Yongqing Wu
The pulse-coupled neural network (PCNN) has been widely used in image processing. The outputs of PCNN represent unique features of the original stimulus and are invariant to translation, rotation, scaling, and distortion, which makes them particularly suitable for feature extraction. In this paper, PCNN and the intersecting cortical model (ICM), a simplified version of PCNN, are applied to extract rotation- and scale-invariant texture features, and a one-class support vector machine (SVM) based classification method is then employed to train on and predict from the features. The experimental results show that the pulse features outperform the classic Gabor features in both feature extraction time and retrieval accuracy, and that the proposed one-class SVM based retrieval system is more accurate and more robust to geometrical changes than the traditional Euclidean distance based system.
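The Euclidean distance baseline that the paper compares against can be sketched in a few lines; the function name, its signature, and the toy feature vectors below are hypothetical illustrations (the proposed system replaces this ranking with a one-class SVM per texture class):

```python
import numpy as np

def retrieve(query_feat, db_feats, k=3):
    """Rank database texture features by Euclidean distance to the query
    and return the indices of the k nearest entries."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)  # distance to each entry
    return np.argsort(d)[:k]                           # k closest, nearest first
```

For example, with database features `[[0, 0], [1, 1], [5, 5]]` and query `[0.9, 1.1]`, the top match is index 1.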
Neurocomputing | 2014
Nianyi Wang; Yide Ma; Kun Zhan
The spiking cortical model (SCM) is derived from the primate visual cortex. It is highly sensitive to low stimulus intensities but less sensitive to high intensities, which makes it suitable for image processing. This paper adopts an improved SCM for multifocus image fusion. We first analyze and compare various image clarity measures, and then propose a new SCM fusion method based on a composite clarity criterion that combines the virtues of two classic criteria. Instead of fixing the number of SCM iterations to a constant, we introduce the time matrix as an adaptive setting, which automatically computes an accurate iteration number for each image. We also optimize the pulse output matrix of each source image according to the natural optical focusing principle before forming the final fused image. To verify the effectiveness of the proposed method, we compare it with ten other methods under four fusion evaluation indices. The experimental results show that the proposed approach obtains better fusion results than the others and is an effective multifocus image fusion method.
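The abstract does not name the two classic clarity criteria, so the sketch below combines two common ones, spatial frequency and energy of Laplacian, as a stand-in composite measure; the weighting and the function names are assumptions:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency: RMS of row and column first differences."""
    rf = np.diff(img, axis=1)
    cf = np.diff(img, axis=0)
    return np.sqrt(np.mean(rf ** 2) + np.mean(cf ** 2))

def energy_of_laplacian(img):
    """Mean squared response of the 4-neighbour Laplacian."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return np.mean(lap ** 2)

def composite_clarity(img, w=0.5):
    """Illustrative composite clarity: weighted sum of the two measures."""
    return w * spatial_frequency(img) + (1 - w) * energy_of_laplacian(img)
```

A sharp image scores higher than a flat (defocused) one, which is the property a clarity criterion needs for focus selection.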
Journal of Multimedia | 2013
Nianyi Wang; Yide Ma; Kun Zhan; Min Yuan
In this paper, we present a new medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the spiking cortical model (SCM). The flexible multi-resolution, anisotropy, and directional expansion characteristics of NSCT are combined with the global coupling and pulse synchronization features of SCM. Considering the characteristics of the human visual system, two different fusion rules are used to fuse the low- and high-frequency sub-bands, respectively. First, the maximum selection rule (MSR) is used to fuse the low-frequency coefficients. Second, spatial frequency (SF) is applied to motivate the SCM network rather than using the coefficient values directly, and the time matrix of SCM is then used as the criterion to select the coefficients of the high-frequency sub-bands. The effectiveness of the proposed algorithm is demonstrated by comparison with existing fusion methods.
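A minimal sketch of the two fusion rules, under the simplifying assumption that local spatial frequency selects the high-frequency coefficients directly (in the paper, SF motivates the SCM and the time matrix makes the selection); all names below are illustrative:

```python
import numpy as np

def local_sf(band, r=1):
    """Local spatial frequency in a (2r+1)x(2r+1) window around each pixel."""
    rf2 = np.zeros_like(band)
    cf2 = np.zeros_like(band)
    rf2[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2   # row-direction differences
    cf2[1:, :] = (band[1:, :] - band[:-1, :]) ** 2   # column-direction differences
    e = rf2 + cf2
    pad = np.pad(e, r)
    out = np.zeros_like(band)
    h, w = band.shape
    for di in range(2 * r + 1):                      # sum over the window
        for dj in range(2 * r + 1):
            out += pad[di:di + h, dj:dj + w]
    return np.sqrt(out / (2 * r + 1) ** 2)

def fuse_bands(lowA, lowB, highA, highB):
    """MSR for the low band; SF-driven selection for the high band."""
    low = np.maximum(lowA, lowB)                     # maximum selection rule
    mask = local_sf(highA) >= local_sf(highB)        # pick the sharper source
    high = np.where(mask, highA, highB)
    return low, high
```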
Neurocomputing | 2017
Kun Zhan; Jinhui Shi; Jicai Teng; Qiaoqiao Li; Mingying Wang; Fuxiang Lu
A general image enhancement framework is proposed. A neural network, the linking synaptic computation network (LSCN), is designed, together with an iterative stopping condition for the network. We find that the final linking synaptic state is related to the stimulus image, and comparisons indicate the efficiency of the proposed algorithms. The linking synapse is introduced into the neural network, inspired by the gamma-band oscillations in visual cortical neurons, and the network is applied to image representation. The linking synaptic mechanism of the network allows integrating temporal and spatial information. An image is input to the network and the enhanced result is obtained from the final linking synaptic state. The result boosts details while preserving the information of the input image. The effectiveness of the method is borne out by five quantitative metrics as well as qualitative comparisons with other methods.
Neural Computation | 2016
Kun Zhan; Jicai Teng; Jinhui Shi; Qiaoqiao Li; Mingying Wang
Inspired by gamma-band oscillations and other neurobiological discoveries, neural network research has shifted its emphasis toward temporal coding, which uses the explicit times at which spikes occur as an essential dimension of neural representations. We present a feature-linking model (FLM) that uses the timing of spikes to encode information. The first spiking time of the FLM is applied to image enhancement, and the processing mechanisms are consistent with the human visual system. The enhancement algorithm boosts details while preserving the information of the input image. Experiments demonstrate the effectiveness of the proposed method.
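A minimal sketch of first-spike-time coding, assuming a geometrically decaying threshold and no linking input (the full FLM includes feature-linking terms); names and parameters are illustrative:

```python
import numpy as np

def first_spike_time(img, iters=30, g=0.9):
    """Record the iteration at which each neuron first fires as a decaying
    threshold sweeps down over the stimulus. Bright pixels fire early,
    dark pixels late, so the time matrix encodes intensity order."""
    S = img.astype(float) / (img.max() + 1e-12)
    E = np.ones_like(S)                        # threshold, decays by factor g
    T = np.full(S.shape, float(iters))         # spike times (default: never fired)
    fired = np.zeros(S.shape, dtype=bool)
    for n in range(1, iters + 1):
        Y = (S >= E) & ~fired                  # neurons firing for the first time
        T[Y] = n
        fired |= Y
        E *= g
    return T

def enhance(img, iters=30):
    """Map first-spike times back to intensities: earlier spike -> brighter."""
    T = first_spike_time(img, iters)
    return (T.max() - T) / (T.max() - T.min() + 1e-12)
```

Because firing order depends on the ratio of intensity to threshold rather than on absolute intensity, this mapping compresses bright regions and stretches dark ones, which is the enhancement effect described above.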
IEEE Transactions on Industrial Informatics | 2018
Zhiqiang Zeng; Zhihui Li; De Cheng; Huaxiang Zhang; Kun Zhan; Yi Yang
Video-based pedestrian reidentification is an emerging task in video surveillance and is closely related to several real-world applications. Its goal is to match pedestrians across multiple nonoverlapping network cameras. Despite recent efforts, the performance of pedestrian reidentification needs further improvement. Hence, we propose a novel two-stream multirate recurrent neural network for video-based pedestrian reidentification with two inherent advantages: first, capturing the static spatial and temporal information; second, dealing with motion speed variance. Given video sequences of pedestrians, we start by extracting spatial and motion features using two different deep neural networks. Then, we explore the feature correlation, which results in a regularized fusion network integrating the two aforementioned networks. Considering that pedestrians, sometimes even the same pedestrian, move at different speeds across different camera views, we extend our approach by feeding the two networks into a multirate recurrent network to exploit the temporal correlations. Extensive experiments have been conducted on two real-world video-based pedestrian reidentification benchmarks: the iLIDS-VID and PRID 2011 datasets. The experimental results confirm the efficacy of the proposed method. Our code will be released upon acceptance.
International Symposium on Neural Networks | 2015
Kun Zhan; Jinhui Shi; Qiaoqiao Li; Jicai Teng; Mingying Wang
The spiking cortical model (SCM) is applied to image segmentation. A natural image is processed by the SCM to produce a series of spike images, and the segmented result is obtained by integrating this series. An appropriate maximum number of iterations is selected to achieve an optimal threshold for the SCM. In each iteration, the neurons that fire correspond approximately to pixels of similar intensity in the input image. The SCM synchronizes the output spikes via fast-linking synaptic modulation, which makes objects in the image as homogeneous as possible. Experimental results show that the output image not only separates objects from the background well but also keeps the pixels within each object homogeneous. The proposed method outperforms other methods, and the quantitative metrics are consistent with the visual performance.
IEEE Transactions on Systems, Man, and Cybernetics | 2018
Jing Wang; Feng Tian; Hongchuan Yu; Chang Hong Liu; Kun Zhan; Xiao Wang
Non-negative matrix factorization (NMF), a method for finding parts-based representations of non-negative data, has shown remarkable competitiveness in data analysis. Given that real-world datasets often comprise multiple features or views that describe data from various perspectives, it is important to exploit the diversity of multiple views for comprehensive and accurate data representations. Moreover, real-world datasets often come with high-dimensional features, which demands efficient low-dimensional representation learning. To address these needs, we propose a diverse NMF (DiNMF) approach. It enhances diversity and reduces redundancy among multiview representations with a newly defined diversity term, and it enables the learning process to run in linear time. We further propose a locality-preserved DiNMF (LP-DiNMF) for more accurate learning, which ensures diversity across views while preserving the local geometric structure of the data in each view. Efficient iterative updating algorithms are derived for both DiNMF and LP-DiNMF, along with proofs of convergence. Experiments on synthetic and real-world datasets demonstrate the efficiency and accuracy of the proposed methods against state-of-the-art approaches, proving the advantage of incorporating the proposed diversity term into NMF.
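For context, the base model that DiNMF extends can be sketched with the standard multiplicative updates of Lee and Seung; the diversity and locality terms of DiNMF/LP-DiNMF are not included, and the function name and parameters are illustrative:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0, eps=1e-9):
    """Factor a non-negative matrix X (m x n) as W @ H with W (m x k),
    H (k x n) non-negative, by multiplicative updates minimizing
    the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))                      # positive init keeps updates non-negative
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)    # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)    # update W with H fixed
    return W, H
```

DiNMF augments this objective with a diversity penalty between the per-view factor matrices, so the updates gain one extra term per view while keeping this multiplicative form.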
Journal of Electronic Imaging | 2015
Kun Zhan; Qiaoqiao Li; Jicai Teng; Mingying Wang; Jinhui Shi
We address the problem of fusing multifocus images based on phase congruency (PC), which provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet; PC is more robust to noise than other FMs. The fused image is obtained by a new fusion rule (FR), which selects the focused region from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluation.
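A sketch of the choose-max fusion rule, with local gradient energy standing in for the paper's phase-congruency focus measure (computing PC from complex Gabor wavelets is beyond a short sketch); all names are illustrative:

```python
import numpy as np

def focus_map(img, r=2):
    """Per-pixel focus measure: gradient energy summed over a local window.
    Stand-in for phase congruency, which plays the same role in the paper."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]       # horizontal gradient
    gy[1:, :] = img[1:, :] - img[:-1, :]       # vertical gradient
    e = gx ** 2 + gy ** 2
    pad = np.pad(e, r)
    out = np.zeros_like(img)
    h, w = img.shape
    for di in range(2 * r + 1):                # accumulate over the window
        for dj in range(2 * r + 1):
            out += pad[di:di + h, dj:dj + w]
    return out

def fuse(imgA, imgB):
    """Fusion rule: take each pixel from the input with the higher focus."""
    mask = focus_map(imgA) >= focus_map(imgB)
    return np.where(mask, imgA, imgB)
```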