Publications


Featured research published by Zailiang Chen.


Computerized Medical Imaging and Graphics | 2017

Retinal vessel segmentation in colour fundus images using Extreme Learning Machine.

Chengzhang Zhu; Beiji Zou; Rongchang Zhao; Jinkai Cui; Xuanchu Duan; Zailiang Chen; Yixiong Liang

Attributes of retinal vessels play an important role in systemic conditions and ophthalmic diagnosis. In this paper, a supervised method based on the Extreme Learning Machine (ELM) is proposed to segment retinal vessels. First, a 39-D discriminative feature vector, consisting of local features, morphological features, phase congruency, Hessian and divergence of vector fields, is extracted for each pixel of the fundus image. A matrix is then constructed from the feature vectors and manual labels of the training-set pixels and acts as the input of the ELM classifier, whose output is the binary retinal vessel segmentation. Finally, a post-processing step removes regions smaller than 30 pixels that are isolated from the retinal vasculature. Experimental results on the public Digital Retinal Images for Vessel Extraction (DRIVE) database demonstrate that the proposed method is much faster than competing methods at segmenting retinal vessels, while the average accuracy, sensitivity and specificity are 0.9607, 0.7140 and 0.9868, respectively. Moreover, the proposed method exhibits high speed and robustness on a new Retinal Images for Screening (RIS) database, and therefore has potential applications in real-time computer-aided diagnosis and disease screening.
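As a rough illustration of the pipeline described above, the sketch below feeds per-pixel feature vectors to an Extreme Learning Machine (random hidden layer, closed-form output weights) and then removes isolated regions smaller than 30 pixels. The 39-D feature extraction is not reproduced; the class and helper names are illustrative, not the paper's.

```python
# Minimal sketch, assuming per-pixel features X (N x 39) and binary labels y
# are already extracted; not the paper's implementation.
import numpy as np
from scipy import ndimage

class ELM:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random input weights/biases stay fixed; only output weights are learned.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y      # closed-form least-squares solution
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(np.uint8)

def remove_small_regions(mask, min_size=30):
    # Post-processing: drop connected components smaller than min_size pixels.
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = np.isin(labeled, 1 + np.flatnonzero(sizes >= min_size))
    return keep.astype(np.uint8)

# Usage (X_train: N x 39 pixel features, y_train: 0/1 vessel labels,
# predictions reshaped back to the image grid):
# elm = ELM().fit(X_train, y_train)
# vessel_map = remove_small_regions(elm.predict(X_test).reshape(h, w))
```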


Computerized Medical Imaging and Graphics | 2017

A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images.

Qing Liu; Beiji Zou; Jie Chen; Wei Ke; Kejuan Yue; Zailiang Chen; Guoying Zhao

Automatic exudate segmentation in colour retinal fundus images is an important task in computer-aided diagnosis and screening systems for diabetic retinopathy. In this paper, we present a location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images, which includes three stages: anatomic structure removal, exudate location and exudate segmentation. In the anatomic structure removal stage, a matched-filter-based main vessel segmentation method and a saliency-based optic disc segmentation method are proposed. The main vessels and optic disc are then removed to eliminate the adverse effects they bring to the second stage. In the location stage, we learn a random forest classifier to classify patches into two classes, exudate patches and exudate-free patches, using histograms of completed local binary patterns to describe the texture structure of the patches. Finally, the local variance, a size prior on the exudate regions and a local contrast prior are used to segment the exudate regions out of the patches classified as exudate patches in the location stage. We evaluate our method at both the exudate level and the image level. For exudate-level evaluation, we test our method on the e-ophtha EX dataset, which provides pixel-level annotations from specialists. The experimental results show that our method achieves 76% sensitivity and 75% positive predictive value (PPV), both of which significantly outperform state-of-the-art methods. For image-level evaluation, we test our method on DiaRetDB1 and achieve competitive performance compared with state-of-the-art methods.
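A minimal sketch of the location stage only, under stated assumptions: patches are described by texture histograms and classified by a random forest into exudate versus exudate-free patches. Plain uniform LBP histograms stand in here for the completed LBP descriptor used in the paper, and the patch extraction and function names are illustrative.

```python
# Sketch of the patch-level location stage; not the paper's implementation.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(patch, P=8, R=1):
    # Texture descriptor for one grayscale patch (uniform LBP, illustrative only).
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def extract_patches(image, size=64, stride=64):
    # Non-overlapping square patches from a grayscale image.
    patches, coords = [], []
    for r in range(0, image.shape[0] - size + 1, stride):
        for c in range(0, image.shape[1] - size + 1, stride):
            patches.append(image[r:r + size, c:c + size])
            coords.append((r, c))
    return patches, coords

# Training (train_patches, train_labels assumed prepared from annotated images):
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit([lbp_histogram(p) for p in train_patches], train_labels)
#
# Location stage on a new image: keep patches predicted as exudate candidates,
# then hand them to the segmentation stage described in the abstract.
# patches, coords = extract_patches(test_image)
# candidates = [c for p, c in zip(patches, coords)
#               if clf.predict([lbp_histogram(p)])[0] == 1]
```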


Multimedia Systems | 2016

Saliency detection using boundary information

Beiji Zou; Qing Liu; Zailiang Chen; Shi-Jian Liu; Xiaoyun Zhang

Efficient and robust saliency detection is a fundamental problem in computer vision because of its wide applications, such as image segmentation and image retargeting. In this paper, with the aim of uniformly highlighting salient objects and suppressing the saliency of the background, we propose an efficient three-stage saliency detection method. First, a boundary prior and a connectivity prior are used to generate coarse saliency maps. To suppress the saliency of a cluttered background, two supergraphs are created together with the adjacency graph, so that the saliency of background regions with similar appearances that are separated by other regions can be reduced effectively. Second, a local context-based saliency propagation is proposed to refine the saliency so that regions with similar features hold similar saliency. Finally, a logistic regressor is learned to combine the three refined saliency maps into the final saliency map automatically. The proposed method improves saliency detection on many cluttered images. Experimental results on two widely used public datasets with pixel-accurate salient region annotations show that our method outperforms state-of-the-art methods.
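The final fusion step lends itself to a short sketch: a logistic regressor maps per-pixel values of the three refined saliency maps to a single fused saliency value. The boundary-prior maps are assumed precomputed, the training targets come from pixel-accurate ground-truth masks, and the function below is an illustrative stand-in rather than the paper's implementation.

```python
# Sketch of logistic-regression fusion of three saliency maps; assumptions noted above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_saliency(maps, ground_truth=None, model=None):
    """maps: list of three HxW saliency maps in [0, 1]."""
    X = np.stack([m.ravel() for m in maps], axis=1)        # one row per pixel
    if model is None:
        y = (ground_truth.ravel() > 0.5).astype(int)       # binary salient / background
        model = LogisticRegression(max_iter=1000).fit(X, y)
    fused = model.predict_proba(X)[:, 1].reshape(maps[0].shape)
    return fused, model

# Usage: fit on an annotated training image, then reuse the learned model:
# _, model = fuse_saliency([m1, m2, m3], ground_truth=gt_mask)
# final_map, _ = fuse_saliency([t1, t2, t3], model=model)
```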


IEEE Transactions on Image Processing | 2017

Hierarchical Contour Closure-Based Holistic Salient Object Detection

Qing Liu; Xiaopeng Hong; Beiji Zou; Jie Chen; Zailiang Chen; Guoying Zhao

Most existing salient object detection methods compute the saliency of pixels, patches, or superpixels by contrast. Such fine-grained contrast-based salient object detection methods suffer from saliency attenuation of the salient object and saliency overestimation of the background when the image is complicated. To better compute the saliency for complicated images, we propose a hierarchical contour closure-based holistic salient object detection method, in which two saliency cues, i.e., closure completeness and closure reliability, are thoroughly exploited. The former pops out the holistic homogeneous regions bounded by completely closed outer contours, and the latter highlights the holistic homogeneous regions bounded by, on average, highly reliable outer contours. Accordingly, we propose two computational schemes to compute the corresponding saliency maps in a hierarchical segmentation space. Finally, we propose a framework to combine the two saliency maps, obtaining the final saliency map. Experimental results on three publicly available datasets show that each single saliency map already reaches state-of-the-art performance. Furthermore, our framework, which combines the two saliency maps, outperforms the state of the art. Additionally, we show that the proposed framework can easily be used to extend existing methods and further improve their performance substantially.
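As a loose sketch of the last step only, the snippet below combines two precomputed saliency maps (here standing in for the closure-completeness and closure-reliability cues) with a simple normalize-and-average fusion; the paper's actual combination framework is not reproduced.

```python
# Simple convex combination of two saliency cues; illustrative stand-in only.
import numpy as np

def normalize(m):
    # Rescale a saliency map to [0, 1]; guard against flat maps.
    lo, hi = m.min(), m.max()
    return np.zeros_like(m) if hi == lo else (m - lo) / (hi - lo)

def combine_maps(completeness_map, reliability_map, weight=0.5):
    # Weight could be tuned on a validation set; 0.5 treats both cues equally.
    return weight * normalize(completeness_map) + (1 - weight) * normalize(reliability_map)
```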


Computers & Graphics | 2018

Classified optic disc localization algorithm based on verification model

Beiji Zou; Changlong Chen; Chengzhang Zhu; Xuanchu Duan; Zailiang Chen

Optic disc (OD) localization plays an important role in the automatic screening of ocular fundus diseases. However, balancing the accuracy and efficiency of OD localization across varied retinal fundus images remains a challenge. In this paper, we propose a new framework that integrates two classes of methods, based on image intensity and on vascular information, to obtain the OD location. The classification algorithm within the framework is based on a verification model. First, an OD candidate region is obtained from image intensity. Second, the candidate region is validated by the verification model. If the verification passes, the corresponding position of the region is taken as the OD center; otherwise, the OD is located by parabola fitting of the main blood vessels followed by relocation. The proposed method was evaluated on four public databases, STARE, DRIVE, DIARETDB0 and DIARETDB1, with accuracies of 96.3%, 100%, 100% and 100%, respectively. The running time per image that passes verification is 0.05 s, 0.03 s, 0.13 s and 0.12 s on the respective databases, while images that fail verification take about 0.49 s, 0.38 s, 2.21 s and 2.15 s, respectively.
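The framework's control flow can be sketched as below. The three component functions (intensity-based candidate, verification model, vessel-based fallback) are placeholders for the methods described in the abstract, not implementations of them.

```python
# Control-flow sketch of the verification-based OD localization framework.
def locate_optic_disc(image, intensity_candidate, verify, vessel_based_locate):
    """intensity_candidate, verify and vessel_based_locate are callables supplied
    by the fast intensity method, the verification model and the parabola-fitting
    fallback, respectively (all assumed, not provided here)."""
    region = intensity_candidate(image)        # fast path: intensity-based candidate
    if verify(image, region):                  # verification model accepts the region
        return region.center
    return vessel_based_locate(image)          # slow path: main-vessel parabola fitting
```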


Journal of Computer Science and Technology | 2018

3D Filtering by Block Matching and Convolutional Neural Network for Image Denoising

Beiji Zou; Yundi Guo; Qi He; Ping-Bo Ouyang; Ke Liu; Zailiang Chen

Block-matching-based 3D filtering methods have achieved great success in image denoising tasks. However, the manually designed filtering operation cannot fully describe a good model for transforming noisy images into clean images. In this paper, we introduce a convolutional neural network (CNN) for the 3D filtering step to learn a well-fitted model for denoising. With a trainable model, prior knowledge is utilized for better mapping from noisy images to clean images. This joint block matching and CNN model (BMCNN) can denoise images of different sizes and noise intensities well, especially images with high noise levels. The experimental results demonstrate that among all competing methods, this method achieves the highest peak signal-to-noise ratio (PSNR) when denoising images with high noise levels (σ > 40), and the best visual quality across all tested noise levels.
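As an illustration of the two components as we read the abstract, the sketch below groups patches similar to a reference patch by block matching and passes the group through a small CNN that predicts a clean estimate of the reference patch. The network is an illustrative stand-in, not the BMCNN architecture, and training is omitted.

```python
# Sketch of block matching + CNN filtering; architecture and parameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

def block_matching(image, ref_rc, patch=8, search=16, n_match=8):
    # Collect the n_match patches closest (in L2 distance) to the reference patch
    # inside a local search window.
    r0, c0 = ref_rc
    ref = image[r0:r0 + patch, c0:c0 + patch]
    candidates = []
    for r in range(max(0, r0 - search), min(image.shape[0] - patch, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(image.shape[1] - patch, c0 + search) + 1):
            p = image[r:r + patch, c:c + patch]
            candidates.append((np.sum((p - ref) ** 2), p))
    candidates.sort(key=lambda t: t[0])
    return np.stack([p for _, p in candidates[:n_match]])   # (n_match, patch, patch)

class GroupDenoiser(nn.Module):
    # Treats the matched group as input channels and predicts one clean patch.
    def __init__(self, n_match=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_match, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, group):                  # group: (batch, n_match, patch, patch)
        return self.net(group)                 # (batch, 1, patch, patch) clean estimate
```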


Iet Image Processing | 2018

Improved multi-scale line detection method for retinal blood vessel segmentation

Kejuan Yue; Beiji Zou; Zailiang Chen; Qing Liu

Changes in the retinal blood vessels are precursors of many serious diseases such as diabetic retinopathy, hypertension and cardiovascular disease. Automatic segmentation of retinal blood vessels in fundus images can assist in the diagnosis of these diseases and has been studied by many researchers. However, the segmentation of pale vessel pixels remains a problem because of their low contrast with surrounding pixels. This study proposes an improved multi-scale line detector to segment retinal vessels. It computes the line responses of vessels in multi-scale windows and takes the maximum as the response value, which enhances the responses of pale vessel pixels near strong vessels or dark background pixels. Experimental results on the publicly available DRIVE database demonstrate that the proposed method detects pale vessel pixels better. It achieves 75.28% sensitivity and 94.47% accuracy, which outperforms state-of-the-art unsupervised methods. Compared with supervised methods, it also achieves better sensitivity and comparable accuracy.
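A minimal sketch of a basic multi-scale line detector in the spirit described above: for each window size, the response at a pixel is the largest mean intensity along a line through the pixel minus the local window mean, and the final response takes the maximum over scales. The paper's specific improvements for pale vessels are not reproduced, and the scales and angle count below are illustrative.

```python
# Basic multi-scale line detector sketch; parameters are assumptions, not the paper's.
import numpy as np
from scipy import ndimage

def line_kernel(length, theta_deg):
    # Binary kernel containing a centred line of the given length and angle,
    # normalised so convolution returns the mean intensity along the line.
    k = np.zeros((length, length))
    c = length // 2
    t = np.deg2rad(theta_deg)
    for s in range(-(length // 2), length // 2 + 1):
        r = int(round(c + s * np.sin(t)))
        col = int(round(c + s * np.cos(t)))
        k[r, col] = 1.0
    return k / k.sum()

def multiscale_line_response(green_channel, scales=(5, 9, 13, 15), n_angles=12):
    # Vessels are dark on the green channel, so invert before measuring responses.
    img = 1.0 - green_channel.astype(float) / 255.0
    response = np.full(img.shape, -np.inf)
    for w in scales:
        window_mean = ndimage.uniform_filter(img, size=w)
        line_max = np.full(img.shape, -np.inf)
        for theta in np.linspace(0, 180, n_angles, endpoint=False):
            line_mean = ndimage.convolve(img, line_kernel(w, theta), mode="nearest")
            line_max = np.maximum(line_max, line_mean)
        response = np.maximum(response, line_max - window_mean)   # max over scales
    return response   # threshold this map to obtain a vessel mask
```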


Journal of Computer Science and Technology | 2017

Automatic Anterior Lamina Cribrosa Surface Depth Measurement Based on Active Contour and Energy Constraint

Zailiang Chen; Peng Peng; Beiji Zou; Hailan Shen; Hao Wei; Rongchang Zhao

The lamina cribrosa is affected by intraocular pressure, which is the major risk factor for glaucoma. However, the capability to evaluate the lamina cribrosa in vivo has until recently been limited by poor image quality and the posterior laminar displacement of glaucomatous eyes. In this study, we propose an automatic method to measure the anterior lamina cribrosa surface depth (ALCSD), including a method for detecting the Bruch's membrane opening (BMO) based on k-means clustering and a region-based active contour. An anterior lamina cribrosa surface segmentation method based on an energy constraint is also proposed. In BMO detection, we initialize the Chan-Vese active contour model using the segmentation map from the k-means clustering. In the segmentation of the anterior lamina cribrosa surface, we utilize the energy function in each A-scan to establish a set of candidates, remove the points in the set that fail to meet the constraints, and finally fit a B-spline to obtain the result. The proposed automatic method can model the posterior laminar displacement by measuring the ALCSD. It achieves a mean error of 45.34 μm in BMO detection, and for the anterior lamina cribrosa surface, 94.1% of the results are within five pixels and 76.1% within three pixels.
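The final B-spline fitting step can be sketched as follows, assuming the per-A-scan candidate points have already been selected and constraint-failing points removed; the candidate selection and energy constraints themselves are not reproduced, and the function name and smoothing parameter are illustrative.

```python
# Sketch of smoothing B-spline fitting over surviving surface candidates.
import numpy as np
from scipy.interpolate import splrep, splev

def fit_surface(ascan_x, surface_z, smooth=50.0):
    """ascan_x: A-scan positions, surface_z: candidate surface depths at those
    positions (constraint-failing points already removed); returns depths on the
    full A-scan grid."""
    x = np.asarray(ascan_x, dtype=float)
    z = np.asarray(surface_z, dtype=float)
    order = np.argsort(x)
    tck = splrep(x[order], z[order], s=smooth)      # smoothing cubic B-spline
    full_x = np.arange(x.min(), x.max() + 1)
    return full_x, splev(full_x, tck)

# The ALCSD at each A-scan can then be taken as the distance from the BMO
# reference line to the fitted surface depth.
```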


Journal of Computer Science and Technology | 2017

Supervised Vessels Classification Based on Feature Selection

Beiji Zou; Yao Chen; Chengzhang Zhu; Zailiang Chen; Ziqian Zhang

Arterial-venous classification of retinal blood vessels is important for the automatic detection of cardiovascular diseases such as hypertensive retinopathy and stroke. In this paper, we propose an arterial-venous classification (AVC) method, which focuses on feature extraction and selection from vessel centerline pixels. The vessel centerline is extracted after the preprocessing steps of vessel segmentation and optic disc (OD) localization. Then, a region of interest (ROI) is extracted around the OD, and the most efficient features of each centerline pixel in the ROI are selected from local features, grey-level co-occurrence matrix (GLCM) features, and an adaptive local binary pattern (A-LBP) feature by using a max-relevance and min-redundancy (mRMR) scheme. Finally, a feature-weighted K-nearest neighbor (FW-KNN) algorithm is used to classify arterial and venous vessels. The experiments on the DRIVE and INSPIRE-AVR databases achieve accuracies of 88.65% and 88.51% within the ROI, respectively.
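A minimal sketch of the classification step: a feature-weighted K-nearest-neighbour rule that scales each feature dimension by a weight before computing distances. The weights here are illustrative inputs (for example, derived from mRMR relevance scores); the paper's exact weighting scheme is not reproduced.

```python
# Feature-weighted KNN sketch; weights and k are assumptions for illustration.
import numpy as np
from collections import Counter

def fw_knn_predict(X_train, y_train, X_test, weights, k=5):
    """X_*: (n, d) feature arrays for centreline pixels, y_train: artery/vein labels,
    weights: (d,) non-negative per-feature weights."""
    w = np.sqrt(np.asarray(weights, dtype=float))    # applied inside the squared distance
    Xtr, Xte = X_train * w, X_test * w
    preds = []
    for x in Xte:
        d = np.sum((Xtr - x) ** 2, axis=1)           # weighted squared Euclidean distance
        nearest = np.argsort(d)[:k]
        preds.append(Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0])
    return np.array(preds)
```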


Archive | 2011

Method for extracting image region of interest by combining bottom-up and top-down ways

Zailiang Chen; Beiji Zou; Yixiong Liang; Hailan Shen; Lei Wang; Yao Xiang; Shenghui Liao; Guojiang Xin

Collaboration


Dive into Zailiang Chen's collaborations.

Top Co-Authors

Beiji Zou (Central South University)
Qing Liu (Central South University)
Chengzhang Zhu (Chinese Ministry of Education)
Hailan Shen (Central South University)
Kejuan Yue (Hunan First Normal University)
Rongchang Zhao (Central South University)
Hao Wei (Central South University)
Peng Peng (Central South University)