Publication


Featured research published by Yuncong Feng.


Signal Processing | 2014

Multi-focus image fusion using image-partition-based focus detection

Xiaoli Zhang; Xiongfei Li; Zhaojun Liu; Yuncong Feng

Abstract: Focus-detection-based fusion is a widely used approach in multi-focus image fusion, and the focus detection measure is its key component. However, nearly all existing measures tend to make incorrect predictions in smooth regions close to edges and textures, because such regions are influenced by the nearby edges and textures and their intensities change considerably when blurred. In this paper, we propose a new focus-detection-based multi-focus image fusion algorithm. First, the source images are partitioned into three parts: edges, textures, and smooth regions. Pixels in smooth regions are further classified into two categories according to their distance from edges or textures. Then, we formulate a new focus detection rule in which pixels in smooth regions are treated differently according to their category. Finally, the fused image is obtained with the aid of the resulting fusion map. The strengths of the algorithm lie in its ability to improve the accuracy of focus detection and to eliminate blockiness in the fused images. Experimental results show that the proposed algorithm achieves good ratings under both Human Visual System (HVS) assessment and objective measures compared with other multi-focus fusion algorithms.
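
A minimal sketch of the overall idea (partition the image, measure focus per region, treat near-edge smooth pixels separately, then fuse) is given below. The Sobel-based partition and the local-variance focus measure are stand-ins chosen for illustration, not the measures defined in the paper.

```python
# Sketch of region-aware focus detection for multi-focus fusion.
# Assumptions: Sobel gradients partition the image, local variance is the
# focus measure; the paper defines its own partition and measure.
import numpy as np
from scipy import ndimage

def partition(img, edge_thr=30.0, dist_thr=3.0):
    """Label pixels: 2 = edge/texture, 1 = smooth but near detail, 0 = far smooth."""
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    detail = grad > edge_thr
    dist = ndimage.distance_transform_edt(~detail)   # distance to nearest detail pixel
    labels = np.zeros(img.shape, dtype=np.uint8)
    labels[detail] = 2
    labels[(~detail) & (dist <= dist_thr)] = 1
    return labels

def focus_measure(img, size=7):
    """Local variance as a simple focus measure."""
    mean = ndimage.uniform_filter(img, size)
    return ndimage.uniform_filter(img * img, size) - mean * mean

def fuse(img_a, img_b):
    img_a, img_b = img_a.astype(np.float64), img_b.astype(np.float64)
    fa, fb = focus_measure(img_a), focus_measure(img_b)
    labels = partition(0.5 * (img_a + img_b))
    decision = fa >= fb                               # True -> take pixel from img_a
    # Smooth pixels near edges/textures are unreliable; decide them from a
    # larger neighborhood instead.
    near = labels == 1
    decision[near] = (ndimage.uniform_filter(fa, 21) >=
                      ndimage.uniform_filter(fb, 21))[near]
    return np.where(decision, img_a, img_b)
```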


Expert Systems With Applications | 2015

Image fusion with Internal Generative Mechanism

Xiaoli Zhang; Xiongfei Li; Yuncong Feng; Haoyu Zhao; Zhaojun Liu

Highlights: The Internal Generative Mechanism is introduced into image fusion. A refined saliency detection method is proposed. Experiments on various images confirm the effectiveness of the algorithm.

In this paper, an Internal Generative Mechanism (IGM) based fusion algorithm is proposed. Source images are decomposed into a coarse layer and a detail layer by simulating how the human visual system perceives images; the detail layer is then fused using a Pulse Coupled Neural Network (PCNN), while the coarse layer is fused using spectral-residual-based saliency; finally, the coefficients of the fused layers are combined to obtain the final fused image. The appeal of the algorithm is that it follows the basic principles of human visual perception and preserves the detail information present in the source images. Experiments on various images are conducted to test its effectiveness. The results show that the fused images achieve satisfying visual quality, and the algorithm outperforms traditional algorithms in terms of objective measures.
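
A minimal sketch of the coarse/detail split and of the spectral-residual saliency used on the coarse layer is shown below. A Gaussian low-pass stands in for the IGM decomposition and a max-absolute rule stands in for the PCNN fusion of the detail layer; both substitutions are assumptions made for brevity.

```python
# Sketch: two-layer decomposition, spectral-residual saliency for the coarse
# layer, max-absolute selection for the detail layer.
import numpy as np
from scipy import ndimage

def spectral_residual_saliency(img, avg_size=3, sigma=2.5):
    spectrum = np.fft.fft2(img)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amp - ndimage.uniform_filter(log_amp, avg_size)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return ndimage.gaussian_filter(sal, sigma)

def fuse_igm_style(img_a, img_b, sigma=2.0):
    img_a, img_b = img_a.astype(np.float64), img_b.astype(np.float64)
    coarse_a = ndimage.gaussian_filter(img_a, sigma)   # coarse layer (stand-in for IGM)
    coarse_b = ndimage.gaussian_filter(img_b, sigma)
    detail_a, detail_b = img_a - coarse_a, img_b - coarse_b
    # Detail layer: keep the coefficient with the larger absolute value.
    detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    # Coarse layer: saliency-weighted average.
    sa = spectral_residual_saliency(img_a)
    sb = spectral_residual_saliency(img_b)
    w = sa / (sa + sb + 1e-8)
    return w * coarse_a + (1.0 - w) * coarse_b + detail
```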


Digital Signal Processing | 2017

A multi-scale 3D Otsu thresholding algorithm for medical image segmentation

Yuncong Feng; Haiying Zhao; Xiongfei Li; Xiaoli Zhang; Hongpeng Li

Abstract: Thresholding is one of the most widely used techniques for image segmentation. In this paper, a novel thresholding algorithm based on 3D Otsu and multi-scale image representation is proposed for medical image segmentation. Because the 3D Otsu algorithm has high time complexity, an accelerated variant is derived using a dimension decomposition rule. To reduce the influence of noise and weak edges, a multi-scale image representation is incorporated into the segmentation procedure, which is designed as an iterative process: in each iteration, the image is segmented by the efficient 3D Otsu and then filtered by a fast local Laplacian filter to produce the smoothed input for the next iteration. Finally, the per-iteration segmentations are pooled into a final result by majority voting. The attractive features of the algorithm are that its results are stable, it is robust to noise, and it handles both bi-level and multi-level thresholding. Experiments on MR brain images demonstrate that the proposed algorithm consistently outperforms other multilevel thresholding algorithms.
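
The iterate-smooth-segment-vote structure can be sketched as follows. A 1D Otsu applied to three features (intensity, local mean, local median) stands in for the decomposed 3D Otsu, and Gaussian smoothing stands in for the fast local Laplacian filter; both are simplifying assumptions for illustration.

```python
# Sketch: decomposed "3D" Otsu on three features, iterated over smoothed
# versions of the image, with a majority vote across iterations.
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability
    m0 = np.cumsum(p * centers)             # class-0 cumulative mean
    mg = m0[-1]                             # global mean
    between = (mg * w0 - m0) ** 2 / (w0 * (1.0 - w0) + 1e-12)
    return centers[np.argmax(between)]

def decomposed_3d_otsu(img, size=3):
    feats = [img, ndimage.uniform_filter(img, size), ndimage.median_filter(img, size)]
    thresholds = [otsu_threshold(f) for f in feats]
    votes = sum((f > t).astype(np.uint8) for f, t in zip(feats, thresholds))
    return votes >= 2                       # foreground if most features agree

def multiscale_segmentation(img, iterations=3, sigma=1.0):
    img = img.astype(np.float64)
    maps = []
    for _ in range(iterations):
        maps.append(decomposed_3d_otsu(img))
        img = ndimage.gaussian_filter(img, sigma)   # smoothed input for next iteration
    # Majority vote across the per-iteration segmentations.
    return sum(m.astype(np.uint8) for m in maps) > (len(maps) // 2)
```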


Multimedia Tools and Applications | 2017

Image fusion based on simultaneous empirical wavelet transform

Xiaoli Zhang; Xiongfei Li; Yuncong Feng

In this paper, a new multi-scale fusion algorithm for multi-sensor images is proposed based on the Empirical Wavelet Transform (EWT). Unlike traditional wavelet transforms, the wavelets of the EWT are not fixed but are generated from the processed signals themselves, which makes them well adapted to those signals. To make the EWT usable for image fusion, a Simultaneous Empirical Wavelet Transform (SEWT) is proposed for both 1D and 2D signals, in which different signals are projected onto the same wavelet set generated from all of the signals jointly. The fusion algorithm built on the 2D SEWT has three steps: the source images are first decomposed into a coarse layer and a detail layer; the detail layers are then fused by selecting the maximum absolute values, and the coarse layers by selecting the maximum global contrast; finally, the coefficients of the fused layers are combined into the final fused image via the inverse 2D SEWT. Experiments on various images examine the performance of the proposed algorithm. The results show that the fused images achieve satisfying visual quality, and the algorithm outperforms traditional algorithms in terms of objective measures.
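
The "simultaneous" idea can be illustrated for 1D signals: one set of spectrum boundaries is estimated from the averaged magnitude spectra of all inputs, and every signal is then decomposed with the same band-pass bank. The peak-based boundary detection and the ideal (brick-wall) filters below are simplifications; the paper constructs proper empirical wavelet filters and a 2D transform.

```python
# Sketch of a shared (simultaneous) spectral decomposition for 1D signals.
import numpy as np
from scipy.signal import find_peaks

def shared_bands(signals, n_bands=3):
    """Band edges derived from the averaged magnitude spectrum of all signals."""
    spec = np.mean([np.abs(np.fft.rfft(s)) for s in signals], axis=0)
    peaks, props = find_peaks(spec, height=0)
    top = np.sort(peaks[np.argsort(props["peak_heights"])[-n_bands:]])
    edges = [(top[i] + top[i + 1]) // 2 for i in range(len(top) - 1)]
    return [0] + edges + [len(spec)]

def simultaneous_decompose(signal, edges):
    """Project one signal onto the shared ideal band-pass bank."""
    spec = np.fft.rfft(signal)
    components = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spec)
        band[lo:hi] = spec[lo:hi]
        components.append(np.fft.irfft(band, n=len(signal)))
    return components
```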


Signal Processing | 2016

A new multifocus image fusion based on spectrum comparison

Xiaoli Zhang; Xiongfei Li; Yuncong Feng

In this paper, a spectrum-comparison-based multi-focus image fusion algorithm is proposed. A distinctive feature of the algorithm is its global focus detection, which keeps the fused image free of block artifacts and reduces the loss of contrast. The source images are first transformed into the Fourier domain, where a Bayesian prediction algorithm is used to smooth the log spectrum of each source image. Comparing the original log spectrum with its smoothed version yields the salient regions of each source image. Sobel-based image segmentation is then used to identify the smooth regions that may be affected by edges or textures, and finally a sigmoid function maps the saliency comparison results to focus detection results in which the affected smooth regions are treated differently. Experimental results demonstrate the superiority of the proposed method in terms of both subjective and objective evaluation.

Highlights: Drawbacks of existing block-selection methods are analyzed. A new focus detection method is proposed to reduce the loss of contrast. A sigmoid function in the fusion rule makes the fused images look more natural. The proposed algorithm holds for both gray-gray and color-color image fusion.
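
The final weighting step can be sketched as follows: a sigmoid maps the per-pixel saliency difference between the two sources to a soft focus map, which avoids the hard 0/1 decisions that cause block artifacts and contrast loss. The saliency maps themselves (from the log-spectrum comparison) are assumed to be given.

```python
# Sketch: sigmoid mapping of a saliency difference to a soft focus map.
import numpy as np

def sigmoid_fusion(img_a, img_b, sal_a, sal_b, k=10.0):
    diff = sal_a - sal_b
    diff = diff / (np.abs(diff).max() + 1e-8)     # normalize to roughly [-1, 1]
    w = 1.0 / (1.0 + np.exp(-k * diff))           # soft focus map in (0, 1)
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)
```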


Signal Processing | 2015

The use of ROC and AUC in the validation of objective image fusion evaluation metrics

Xiaoli Zhang; Xiongfei Li; Yuncong Feng; Zhaojun Liu

Objective image fusion evaluation metrics play a vital role in choosing suitable fusion algorithms and optimizing their parameters, yet little effort has been devoted to validating the metrics themselves. In this paper, we propose a novel validation method based on the ROC (Receiver Operating Characteristic) curve and the AUC (Area Under the ROC Curve). The method takes the predicted quality scores into account rather than merely counting how many fused images are correctly evaluated, which makes it more discriminating than existing approaches. Experimental results show that it is a reliable and precise way to validate objective fusion evaluation metrics. The paper is of particular interest to researchers designing objective fusion metrics and to those constructing image sets for testing them.

Highlights: Drawbacks of existing validation methods for image fusion metrics are analyzed. ROC curves are adopted to validate objective image fusion evaluation metrics. The method takes the score given to each fused image into account. The method can easily be extended to other fields.
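
The validation idea can be sketched in a few lines: treat the metric's quality scores as a classifier output for "well fused vs. degraded" images and measure how well the scores separate the two groups with ROC/AUC. The labels and scores below are placeholders, not data from the paper.

```python
# Sketch: ROC/AUC as a validation tool for a fusion quality metric.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = correctly fused "good" image, 0 = deliberately degraded fusion.
labels = np.array([1, 1, 1, 0, 0, 1, 0, 0])
# Quality scores assigned by the fusion metric under validation (placeholders).
scores = np.array([0.91, 0.84, 0.88, 0.42, 0.55, 0.79, 0.61, 0.37])

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)
print(f"AUC of the metric under test: {auc:.3f}")  # closer to 1.0 = more discriminating
```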


Signal Processing | 2016

A weighted-ROC graph based metric for image segmentation evaluation

Yuncong Feng; Xuanjing Shen; Haipeng Chen; Xiaoli Zhang

Evaluating image segmentation algorithms is a crucial task in image processing. Traditional objective measures such as ME and JS treat object pixels and background pixels identically, which is often unreasonable in practice. To overcome this problem, a new objective evaluation metric based on a weighted ROC graph is proposed in this paper. Since pixels at different positions may have different importance, each pixel is assigned a weight derived from its spatial information, and an ROC (receiver operating characteristic) graph with this weighting strategy is constructed to evaluate segmentation algorithms quantitatively. The proposed metric focuses on the segmented objects, which is consistent with the human visual system, while retaining the robustness of the ROC against region imbalance. Experiments on various images show that the proposed metric gives more reasonable evaluation results than other metrics.

Highlights: The importance of each pixel is taken into account during evaluation. A weighting strategy is applied to the ROC graph. The distorted evaluation of images with region imbalance is eliminated. The assessment results agree more closely with subjective evaluation. The metric holds for both uniformly and non-uniformly illuminated images.
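
A minimal sketch of a spatially weighted true/false positive rate is given below. Weighting pixels by their closeness to the ground-truth object boundary is an assumption chosen for illustration; the paper defines its own spatial weighting.

```python
# Sketch: one weighted-ROC point (TPR, FPR) for a binary segmentation.
import numpy as np
from scipy import ndimage

def spatial_weights(gt, sigma=5.0):
    """Pixels near the object boundary count more than far-away pixels."""
    boundary = gt ^ ndimage.binary_erosion(gt)
    dist = ndimage.distance_transform_edt(~boundary)
    return np.exp(-dist / sigma)

def weighted_rates(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    w = spatial_weights(gt)
    tp = w[seg & gt].sum()
    fp = w[seg & ~gt].sum()
    fn = w[~seg & gt].sum()
    tn = w[~seg & ~gt].sum()
    tpr = tp / (tp + fn + 1e-12)
    fpr = fp / (fp + tn + 1e-12)
    return tpr, fpr          # one weighted-ROC point for this segmentation
```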


Multimedia Tools and Applications | 2017

Segmentation fusion based on neighboring information for MR brain images

Yuncong Feng; Xuanjing Shen; Haipeng Chen; Xiaoli Zhang

In this paper, we study how to boost image segmentation algorithms. First, a novel fusion scheme is proposed that combines different segmentations using mutual information, reducing misclassified pixels and yielding a more accurate segmentation. Because the class label of each pixel depends on both its gray level and its neighbors' labels, the fusion scheme takes the spatial and intensity information of pixels into account. A detailed thresholding segmentation case is then designed using the proposed scheme: a local Laplacian filter produces a smoothed version of the original image, and, to accelerate segmentation, a discrete-curve-evolution-based Otsu method segments the original image and its smoothed version to obtain two segmentation maps, which the fusion scheme combines into the final result. Experiments on MR-T2 brain images demonstrate the effectiveness of the proposed segmentation fusion method; the results indicate that it improves segmentation accuracy and is superior to other multilevel thresholding methods.
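
A minimal sketch of fusing two label maps with both intensity and neighborhood information follows. The concrete decision rule (class-mean distance combined with a neighborhood vote) mirrors the idea described above but is an assumption, not the paper's exact formulation.

```python
# Sketch: fuse two segmentation maps; pixels where the maps disagree are
# relabeled using intensity similarity plus a neighborhood vote.
import numpy as np
from scipy import ndimage

def fuse_segmentations(img, seg_a, seg_b, n_classes):
    img = img.astype(np.float64)
    fused = seg_a.copy()
    agree = seg_a == seg_b
    # Class means estimated from pixels on which both maps agree
    # (assumes every class appears somewhere the two maps agree).
    means = np.array([img[agree & (seg_a == c)].mean() for c in range(n_classes)])
    # Neighborhood votes: how strongly each label is supported around a pixel.
    votes = np.stack([ndimage.uniform_filter((seg_a == c).astype(float), 3) +
                      ndimage.uniform_filter((seg_b == c).astype(float), 3)
                      for c in range(n_classes)])
    # For disagreeing pixels, pick the class that is close in intensity AND
    # popular among the neighbors.
    intensity_cost = np.abs(img[None, ...] - means[:, None, None])
    score = votes - intensity_cost / (intensity_cost.max() + 1e-8)
    fused[~agree] = np.argmax(score, axis=0)[~agree]
    return fused
```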


International Conference on Multimedia and Expo | 2016

A semi-automatic brain tumor segmentation algorithm

Xiaoli Zhang; Xiongfei Li; Hongpeng Li; Yuncong Feng

In this paper, a novel semi-automatic segmentation algorithm is proposed to segment brain tumors in magnetic resonance imaging (MRI) images. First, an edge-aware filter produces a smoothed version of the original image. Second, Otsu-based multilevel thresholding is performed on the smoothed image and on the original image, respectively. The two segmentation maps are then fused by a K Nearest Neighbors (KNN) rule to obtain a refined segmentation; together, these three steps constitute a multi-scale Otsu based segmentation. Finally, a bi-directional region growing method segments the brain tumor region around seeds placed by the user. The proposed algorithm is tested on MRI-T2 images and produces promising results: the segmented tumor regions are more accurate than those obtained by other state-of-the-art methods.
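
The seed-based step can be sketched as a simple region growing by intensity similarity; a single breadth-first pass with a fixed tolerance stands in for the bi-directional region growing described above.

```python
# Sketch: grow a tumor region from user-placed seeds by intensity similarity.
import numpy as np
from collections import deque

def region_grow(img, seeds, tol=15.0):
    """seeds: list of (row, col) positions inserted by the user."""
    img = img.astype(np.float64)
    mask = np.zeros(img.shape, dtype=bool)
    seed_mean = np.mean([img[r, c] for r, c in seeds])
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected growth
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not mask[rr, cc]
                    and abs(img[rr, cc] - seed_mean) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask
```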


Pattern Recognition and Image Analysis | 2017

Image clustering segmentation based on SLIC superpixel and transfer learning

Xiang Li; Xuanjing Shen; Hai Peng Chen; Yuncong Feng

The traditional fuzzy C-means clustering algorithm suffers from poor noise immunity and unsatisfactory clustering results in image segmentation. To overcome this problem, a novel image clustering algorithm based on SLIC superpixels and transfer learning is proposed in this paper. The SLIC superpixel step improves how well segment boundaries match image edges and enhances robustness to noise, while transfer learning is used to correct the segmentation result and further improve its accuracy. In addition, the proposed algorithm refines the original SLIC superpixel algorithm so that superpixel edges are more accurate. Experimental results show that the proposed algorithm obtains better segmentation results.
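
A minimal sketch of clustering superpixels instead of raw pixels is shown below. K-means on superpixel mean colors stands in for the fuzzy C-means and transfer learning components; it only illustrates why the superpixel step helps with noise (each sample aggregates many pixels).

```python
# Sketch: SLIC superpixels followed by clustering of per-superpixel features.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_clustering(image, n_segments=300, n_clusters=3):
    """image: H x W x 3 float array in [0, 1]."""
    sp = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = sp.max() + 1
    # Mean color of each superpixel is the feature vector.
    feats = np.array([image[sp == i].mean(axis=0) for i in range(n_sp)])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels[sp]          # map cluster labels back onto the pixel grid
```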
