Benzheng Wei
Shandong University of Traditional Chinese Medicine
Publications
Featured research published by Benzheng Wei.
Neurocomputing | 2015
Shuangling Wang; Yilong Yin; Guibao Cao; Benzheng Wei; Yuanjie Zheng; Gongping Yang
Segmentation of retinal blood vessels is of substantial clinical importance for the diagnosis of many diseases, such as diabetic retinopathy, hypertension, and cardiovascular disease. In this paper, a supervised method is presented to tackle the problem of retinal blood vessel segmentation by combining two strong classifiers: a Convolutional Neural Network (CNN) and Random Forests (RF). In this method, the CNN serves as a trainable hierarchical feature extractor and the ensemble of RFs works as a trainable classifier. By integrating the merits of feature learning and a traditional classifier, the proposed method automatically learns features from raw images and predicts vessel patterns. Extensive experiments have been conducted on two public retinal image databases (DRIVE and STARE), and comparisons with other major studies on the same databases demonstrate the promising performance and effectiveness of the proposed method. Highlights: a supervised method based on feature and ensemble learning is proposed; the whole pipeline is automatic and trainable; the Convolutional Neural Network performs as a trainable hierarchical feature extractor; ensemble Random Forests work as a trainable classifier; compared with the state of the art, the experimental results are promising.
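A minimal sketch of the CNN-as-feature-extractor plus Random Forest idea described in this abstract, assuming a patch-based formulation; the patch size, layer sizes, and random stand-in data are illustrative placeholders rather than the authors' configuration.

```python
# CNN extracts features from retinal patches; a Random Forest classifies them.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class PatchCNN(nn.Module):
    """Small CNN mapping a 32x32 retinal image patch to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 8 * 8, feat_dim)

    def forward(self, x):                      # x: (N, 1, 32, 32) patches
        return self.fc(self.features(x).flatten(1))

cnn = PatchCNN().eval()
patches = torch.rand(200, 1, 32, 32)           # stand-in for DRIVE/STARE patches
labels = np.random.randint(0, 2, 200)          # stand-in vessel / non-vessel labels
with torch.no_grad():
    feats = cnn(patches).numpy()               # learned hierarchical features
rf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
pred = rf.predict(feats)                       # per-patch vessel predictions
```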
Scientific Reports | 2017
Zhongyi Han; Benzheng Wei; Yuanjie Zheng; Yilong Yin; Kejian Li; Shuo Li
Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification identifies subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). However, it faces two main challenges: (1) it is far more difficult than binary classification (benign versus malignant), and (2) the differences between classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Therefore, automated breast cancer multi-classification from histopathological images is of great clinical significance yet has not previously been explored; existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed deep learning model. The structured deep learning model achieves remarkable performance (93.2% average accuracy) on a large-scale dataset, which demonstrates the strength of our method as an efficient tool for breast cancer multi-classification in clinical settings.
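As a rough illustration of the multi-class setting (not the authors' structured model), a generic CNN classifier over histopathology patches might look like the sketch below; the eight-class count and stand-in tensors are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

num_classes = 8                                  # hypothetical subtype count
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, num_classes),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on stand-in data (real inputs would be stained tissue patches).
images = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, num_classes, (16,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```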
Computational and Mathematical Methods in Medicine | 2014
Shiyong Ji; Benzheng Wei; Zhen Yu; Gongping Yang; Yilong Yin
Medical image segmentation is a key image-processing step for brain MRI. However, due to the visually complex appearance of image structures and the imaging characteristics, it is still challenging to segment brain MRI images automatically. A new multi-stage segmentation method based on superpixels and fuzzy clustering (MSFCM) is proposed to achieve good brain MRI segmentation results. MSFCM uses superpixels rather than pixels as the clustering objects, which coarsens the clustering granularity and effectively suppresses the influence of noise and bias. In the first stage, the MRI image is parsed into atomic areas, namely superpixels, and areas whose gray-level variance exceeds a set threshold are parsed further. Subsequently, the designed fuzzy clustering is applied to the fuzzy membership of each superpixel, and an iterative broadcast method based on the Butterworth function is used to refine their class assignments. Finally, the segmented image is obtained by merging superpixels that share the same class label. The simulated brain database from the BrainWeb site is used in the experiments, and the results demonstrate that MSFCM outperforms the traditional FCM algorithm in terms of segmentation accuracy and stability for MRI images.
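A hedged sketch of the superpixel-then-fuzzy-clustering idea, assuming scikit-image's SLIC for the superpixel stage and a plain fuzzy c-means on superpixel mean intensities; the variance-based re-parsing and Butterworth broadcast stages of MSFCM are not reproduced here.

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(128, 128)                 # stand-in for an MRI slice
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)
regions = np.unique(labels)
means = np.array([image[labels == r].mean() for r in regions])

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D data; returns the membership matrix (len(x), c)."""
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9   # distances to centers
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # normalize memberships
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return u

memberships = fuzzy_cmeans(means)                # one membership row per superpixel
segmentation = memberships.argmax(1)[np.searchsorted(regions, labels)]
```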
Neurocomputing | 2017
Xiaodan Sui; Yuanjie Zheng; Benzheng Wei; Hongsheng Bi; Jianfeng Wu; Xuemei Pan; Yilong Yin; Shaoting Zhang
Examining the choroid in Optical Coherence Tomography (OCT) plays a vital role in studying the pathophysiologic factors of many ocular conditions. Among the existing approaches to detecting choroidal boundaries, graph-search-based techniques are the state of the art. However, most of these techniques rely on hand-crafted models of the graph-edge weight, and their performance is limited mainly by weak choroidal boundaries, the inhomogeneity of the choroid's textural structure, and the great variation of choroidal thickness. To circumvent this limitation, we present a multi-scale, end-to-end convolutional network architecture in which an optimal graph-edge weight is learned directly from raw pixels. Our method operates on multiple scales and combines local and global information from the 2D OCT image. Experimental results on 912 OCT B-scans show that our learned graph-edge weights outperform conventional hand-crafted ones and behave robustly and accurately whether the OCT image comes from normal subjects or from patients with significant retinal structure variations.
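As a sketch of the graph-search step only: given a per-pixel boundary cost map (which the paper learns with a CNN; a random map stands in for it here), a simple dynamic program can trace one minimal-cost boundary crossing every column of a B-scan. The step constraint and array sizes are illustrative assumptions.

```python
import numpy as np

cost = np.random.rand(64, 256)                 # rows x columns of an OCT B-scan
H, W = cost.shape
acc = cost.copy()                              # accumulated cost per (row, column)
back = np.zeros((H, W), dtype=int)

for col in range(1, W):
    for row in range(H):
        lo, hi = max(0, row - 1), min(H, row + 2)     # allow +/-1 row per column step
        prev = int(np.argmin(acc[lo:hi, col - 1])) + lo
        back[row, col] = prev
        acc[row, col] += acc[prev, col - 1]

# Trace the optimal boundary back from the cheapest endpoint in the last column.
boundary = np.empty(W, dtype=int)
boundary[-1] = int(np.argmin(acc[:, -1]))
for col in range(W - 1, 0, -1):
    boundary[col - 1] = back[boundary[col], col]
# boundary[c] now gives the detected boundary row in column c.
```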
Biomedical Engineering Online | 2013
Shuangling Wang; Guibao Cao; Benzheng Wei; Yilong Yin; Gongping Yang; Chunming Li
Background: Segmentation of neuronal electron microscopy (EM) images is the basic and key step in efficiently reconstructing 3D brain structure and connectivity for a better understanding of the central nervous system. However, due to the visually complex appearance of neuronal structures, it is challenging to segment membranes from EM images automatically. Methods: In this paper, we present a fast, efficient segmentation method for neuronal EM images that uses hierarchical-level features based on supervised learning. Hierarchical-level features are designed by combining pixel and superpixel information to describe the EM image. Because pixels within a superpixel have similar characteristics, only a subset of them is automatically selected, which reduces information redundancy. For each selected pixel, 34-dimensional features are extracted in the traditional way. Each superpixel is treated as a unit from which 35-dimensional features are extracted with statistical methods, and 3-dimensional context-level features among neighboring superpixels are also extracted. These three kinds of features are combined into one feature vector, the hierarchical-level features used for segmentation. A random forest classifier is trained with the hierarchical-level features to perform the segmentation. Results: With a small sample size and low-dimensional features, the effectiveness of our method is verified on the ISBI 2012 EM Segmentation Challenge dataset, where the Rand error, warping error, and pixel error reach 0.106308715, 0.001200104, and 0.079132453, respectively. Conclusions: Compared with pixel-level or superpixel-level features alone, hierarchical-level features have better discrimination ability, and the proposed method is promising for membrane segmentation.
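A rough sketch of assembling hierarchical-level features for a selected pixel: pixel-level descriptors, statistics of its superpixel, and a context value from surrounding superpixels are concatenated and fed to a random forest. The concrete 34/35/3-dimensional feature definitions of the paper are not reproduced; the feature functions below are simplified stand-ins.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

image = np.random.rand(128, 128)                        # stand-in EM image
sp = slic(image, n_segments=300, compactness=0.1, channel_axis=None)

def pixel_features(img, r, c):
    patch = img[r - 2:r + 3, c - 2:c + 3]               # local intensity statistics
    return np.array([img[r, c], patch.mean(), patch.std()])

def superpixel_features(img, sp, label):
    vals = img[sp == label]                             # statistics of the superpixel
    return np.array([vals.mean(), vals.std(), vals.min(), vals.max()])

def context_features(img, sp, label):
    return np.array([img[sp != label].mean()])          # crude stand-in for neighbors

coords = np.random.randint(2, 126, size=(20, 2))        # a few sampled pixels
X = np.stack([np.concatenate([pixel_features(image, r, c),
                              superpixel_features(image, sp, sp[r, c]),
                              context_features(image, sp, sp[r, c])])
              for r, c in coords])
y = np.random.randint(0, 2, len(X))                     # stand-in membrane labels
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
```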
Bio-medical Materials and Engineering | 2014
Jinyu Cong; Benzheng Wei; Yilong Yin; Xiaoming Xi; Yuanjie Zheng
The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to many kinds of image processing because of its perceptually meaningful superpixels. To better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of SLIC in comparison with the Normalized Cuts and Turbopixels algorithms. Extensive experimental results show that SLIC is faster and less sensitive to the image type and to the chosen superpixel number than similar algorithms such as Turbopixels and Normalized Cuts. It also performs well in terms of boundary recall, robustness to fuzzy boundaries, the chosen superpixel size, and overall segmentation performance on medical images.
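A hedged sketch of the boundary-recall indicator mentioned above: the fraction of ground-truth boundary pixels lying within a small tolerance of a superpixel boundary. The tolerance band and toy images are illustrative choices, not the paper's evaluation protocol.

```python
import numpy as np
from skimage.segmentation import slic, find_boundaries
from scipy.ndimage import binary_dilation

image = np.random.rand(128, 128)
gt = (image > 0.5).astype(int)                          # stand-in ground-truth regions
sp = slic(image, n_segments=200, compactness=0.1, channel_axis=None)

sp_boundary = find_boundaries(sp)
gt_boundary = find_boundaries(gt)
tolerance = binary_dilation(sp_boundary, iterations=2)  # 2-pixel tolerance band

boundary_recall = (gt_boundary & tolerance).sum() / max(gt_boundary.sum(), 1)
print(f"boundary recall: {boundary_recall:.3f}")
```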
Journal of Biomimetics, Biomaterials, and Tissue Engineering | 2013
Guibao Cao; Shuangling Wang; Benzheng Wei; Yilong Yin; Gongping Yang
To gain new insights into the function and structure of the brain, neuroanatomists need to build 3D reconstructions of brain tissue from electron microscopy (EM) images. One key step toward this is the automatic segmentation of the neuronal structures depicted in stacks of EM images. However, due to the visually complex appearance of neuronal structures, it is challenging to segment membranes in EM images automatically. Based on a Convolutional Neural Network (CNN) and a Random Forest (RF) classifier, a hybrid CNN-RF method for EM neuron segmentation is presented. The CNN is first trained as a feature extractor, and well-behaved features are then learned automatically with this trained extractor. Finally, a Random Forest classifier is trained on the learned features to perform neuron segmentation. Experiments on the ISBI 2012 EM Segmentation Challenge benchmark show the effectiveness of the proposed method: the Rand error, warping error, and pixel error reach 0.109388991, 0.001455688, and 0.072129307, respectively.
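For context on the figures quoted above, the simplest notion of pixel error is the fraction of pixels where a thresholded membrane prediction disagrees with the ground truth; the challenge's official metrics use more refined, threshold-optimized variants. A toy computation under that simple definition:

```python
import numpy as np

prob = np.random.rand(512, 512)              # stand-in membrane probability map
gt = np.random.rand(512, 512) > 0.5          # stand-in binary ground truth

pred = prob > 0.5                            # threshold chosen for illustration
pixel_error = np.mean(pred != gt)
print(f"pixel error: {pixel_error:.4f}")
```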
Medical Image Computing and Computer-Assisted Intervention | 2018
Zhongyi Han; Benzheng Wei; Stephanie Leung; Jonathan Chung; Shuo Li
The objective of this work is to automatically generate unified radiological reports of lumbar spinal MRIs, i.e., given an MRI of a lumbar spine, directly generate a radiologist-level report to support clinical decision making. We show that this can be achieved via a weakly supervised framework that combines deep learning and symbolic program synthesis theory to address four unavoidable tasks: semantic segmentation, radiological classification, positional labeling, and structural captioning. The framework is weakly supervised in that it uses object-level annotations, rather than radiologist-level report annotations, to generate unified reports. Each generated report covers nearly all types of lumbar structures, comprising six intervertebral discs, six neural foramina, and five lumbar vertebrae. Each report describes the exact locations and pathological correlations of these lumbar structures, as well as their normality with respect to three relevant types of spinal disease: intervertebral disc degeneration, neural foraminal stenosis, and lumbar vertebral deformity. The framework is applied to a large corpus of T1/T2-weighted sagittal MRIs of 253 subjects acquired from multiple vendors. Extensive experiments demonstrate that the framework is able to generate unified radiological reports, revealing its effectiveness and its potential as a clinical tool that relieves spinal radiologists of part of their laborious workload, thereby saving time and expediting the initiation of specific therapies.
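Purely as an illustration of how per-structure predictions can be assembled into a unified report, a template-based captioning sketch is shown below; the structure names and findings are hypothetical, and the paper's actual symbolic program synthesis component is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class StructureFinding:
    name: str        # e.g. "L4-L5 intervertebral disc"
    location: str    # e.g. "L4-L5 level"
    abnormal: bool
    disease: str     # e.g. "disc degeneration"

def caption(f: StructureFinding) -> str:
    """Turn one structure's classification result into a report sentence."""
    if f.abnormal:
        return f"{f.name} at the {f.location} shows findings consistent with {f.disease}."
    return f"{f.name} at the {f.location} appears normal."

findings = [
    StructureFinding("L4-L5 intervertebral disc", "L4-L5 level", True, "disc degeneration"),
    StructureFinding("L4-L5 neural foramen", "L4-L5 level", False, "neural foraminal stenosis"),
]
report = "\n".join(caption(f) for f in findings)
print(report)
```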
Medical Image Computing and Computer-Assisted Intervention | 2018
Fang Yan; Jia Cui; Yu Wang; Hong Liu; Hui Liu; Benzheng Wei; Yilong Yin; Yuanjie Zheng
This paper presents a deep random walk technique for drusen segmentation from fundus images. It is formulated as a deep learning architecture that learns deep representations from fundus images and specifies an optimal pixel-to-pixel affinity. Specifically, the proposed architecture is composed of three parts: a deep feature extraction module that learns both semantic-level and low-level image representations, an affinity learning module that produces pixel-to-pixel affinities for building the transition matrix of the random walk, and a random walk module that propagates manual labels. The power of our technique comes from the fact that the learning procedures for the deep image representations and the pixel-to-pixel affinities are driven by the random walk process. The accuracy of the proposed algorithm surpasses state-of-the-art drusen segmentation techniques, as validated on the public STARE and DRIVE databases.
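A small sketch of the random-walk propagation step in isolation: given a pixel-to-pixel affinity matrix (which the paper learns; a toy chain affinity stands in here), labels from a few seed pixels are spread through the row-normalized transition matrix.

```python
import numpy as np

n = 100                                        # toy "image" of 100 pixels on a line
W = np.zeros((n, n))
for i in range(n - 1):                         # simple chain affinities as a stand-in
    W[i, i + 1] = W[i + 1, i] = 1.0
P = W / W.sum(axis=1, keepdims=True)           # transition matrix P = D^-1 W

labels = np.zeros((n, 2))                      # two classes: drusen / background
labels[0] = [1, 0]                             # seed pixel labeled drusen
labels[-1] = [0, 1]                            # seed pixel labeled background
seeds = np.zeros(n, dtype=bool)
seeds[[0, -1]] = True

probs = labels.copy()
for _ in range(500):                           # iterate until propagation stabilizes
    probs = P @ probs
    probs[seeds] = labels[seeds]               # clamp the seed labels
segmentation = probs.argmax(axis=1)            # per-pixel class after propagation
```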
Medical Image Analysis | 2018
Zhongyi Han; Benzheng Wei; Ashley Mercado; Stephanie Leung; Shuo Li
Spinal clinicians still rely on laborious workloads to conduct comprehensive assessments of multiple spinal structures in MRIs in order to detect abnormalities and discover possible pathological factors. The objective of this work is to perform automated segmentation and classification (i.e., normal versus abnormal) of intervertebral discs, vertebrae, and neural foramina in MRIs in one shot. This semantic segmentation is urgently needed to assist spinal clinicians in diagnosing neural foraminal stenosis, disc degeneration, and vertebral deformity, as well as in discovering possible pathological factors. However, no previous work has achieved the simultaneous semantic segmentation of intervertebral discs, vertebrae, and neural foramina, owing to three unusual challenges: 1) multiple tasks, i.e., simultaneous semantic segmentation of multiple spinal structures, are more difficult than individual tasks; 2) multiple targets: on average 21 spinal structures per MRI require automated analysis yet exhibit high variety and variability; 3) weak spatial correlations and subtle differences between normal and abnormal structures generate dynamic complexity and indeterminacy. In this paper, we propose a Recurrent Generative Adversarial Network called Spine-GAN to resolve the aforementioned challenges. Firstly, Spine-GAN explicitly handles the high variety and variability of complex spinal structures through an atrous convolution (i.e., convolution with holes) autoencoder module that obtains semantic, task-aware representations while preserving fine-grained structural information. Secondly, Spine-GAN dynamically models the spatial pathological correlations between normal and abnormal structures through a specially designed long short-term memory module. Thirdly, Spine-GAN achieves reliable performance and efficient generalization by leveraging a discriminative network that corrects prediction errors and enforces global-level contiguity. Extensive experiments on the MRIs of 253 patients demonstrate that Spine-GAN achieves a high pixel accuracy of 96.2%, a Dice coefficient of 87.1%, a sensitivity of 89.1%, and a specificity of 86.0%, which reveals its effectiveness and potential as a clinical tool.
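A minimal sketch of an atrous (dilated) convolution encoder-decoder of the kind the abstract describes; channel counts and dilation rates are illustrative choices, not Spine-GAN's actual configuration, and the LSTM and discriminator modules are omitted.

```python
import torch
import torch.nn as nn

class AtrousAutoencoder(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU(),   # atrous conv
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),   # wider receptive field
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),      # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One forward pass on a stand-in sagittal MRI slice.
net = AtrousAutoencoder()
scores = net(torch.rand(1, 1, 128, 128))        # (1, num_classes, 128, 128)
```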