
Publications


Featured research published by Zengchang Qin.


Neurocomputing | 2017

Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features

Tao Wan; Jiajia Cao; Jianhui Chen; Zengchang Qin

We present a novel image-analysis method for automatically distinguishing low, intermediate, and high grades of breast cancer in digitized histopathology. A multi-level feature set, including pixel-, object-, and semantic-level features derived from convolutional neural networks (CNN), is extracted from 106 hematoxylin and eosin stained breast biopsy tissue studies from 106 patients. These multi-level features allow not only characterization of cancer morphology, but also extraction of structural and interpretable information from the histopathological images. In this study, an improved hybrid active contour model based segmentation method was used to segment nuclei from the images. The semantic-level features, which describe the proportions of nuclei belonging to the different grades, were extracted by a CNN approach and combined with pixel-level (texture) and object-level (architecture) features to create an integrated set of image attributes that can potentially outperform any single feature subtype. We used a cascaded approach to train multiple support vector machine (SVM) classifiers on combinations of feature subtypes, maximizing performance by leveraging feature sets extracted at multiple levels. The final class (cancer grade) was determined by combining the scores produced by the individual SVM classifiers. By employing a light (three-layer) CNN model and parallel computing, the presented approach is computationally efficient and applicable to large-scale datasets. The method achieved an accuracy of 0.92 for low versus high, 0.77 for low versus intermediate, and 0.76 for intermediate versus high grades, and an overall accuracy of 0.69 when discriminating low, intermediate, and high grades of histopathological breast cancer images. This suggests that our grading method could be useful in developing a computational diagnostic tool for differentiating breast cancer grades, which might enable an objective and reproducible alternative for diagnosis.

Highlights:
- An automated breast cancer grading method in histopathology is presented.
- Multi-level features are extracted to capture histomorphometric attributes in histology.
- The grading method greatly improves breast cancer grading performance.
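The final fusion step described above, combining scores from SVMs trained on different feature subtypes, can be sketched as follows. The scores and the uniform weighting are illustrative assumptions, not the paper's trained values:

```python
import numpy as np

def fuse_scores(score_list, weights=None):
    """Combine per-feature-set classifier scores into a final grade.

    score_list: list of (n_samples, n_grades) score arrays, one per
    feature subtype (e.g. pixel-, object-, and semantic-level).
    """
    scores = np.stack(score_list)                  # (n_sets, n, g)
    if weights is None:
        weights = np.ones(len(score_list)) / len(score_list)
    fused = np.tensordot(weights, scores, axes=1)  # weighted sum -> (n, g)
    return fused.argmax(axis=1)                    # predicted grade index

# Mock scores from three SVMs (pixel-, object-, semantic-level features)
pixel    = np.array([[0.8, 0.1, 0.1], [0.2, 0.5, 0.3]])
objectf  = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
semantic = np.array([[0.7, 0.2, 0.1], [0.1, 0.7, 0.2]])
grades = fuse_scores([pixel, objectf, semantic])
# grades[0] -> low (0); grades[1] -> intermediate (1)
```

Non-uniform weights would let a stronger feature subtype dominate the cascade, which is one way such a combination can outperform any single classifier.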


international conference on image processing | 2014

Wavelet-based statistical features for distinguishing mitotic and non-mitotic cells in breast cancer histopathology

Tao Wan; Xu Liu; Jianhui Chen; Zengchang Qin

To diagnose breast cancer (BCa), the number of mitotic cells present in tissue sections is an important parameter for examining and grading breast biopsy specimens. Differentiating mitotic from non-mitotic cells in breast histopathological images is a crucial step for automatic mitosis detection. This work aims at improving the accuracy of mitosis classification by characterizing objects of interest (tissue cells) in wavelet-based multi-resolution representations that better capture the statistical features discriminative for mitosis. A dual-tree complex wavelet transform (DT-CWT) is performed to decompose the image patches into multi-scale forms. Five commonly used statistical features are extracted from each wavelet subband. Since both mitotic and non-mitotic cells appear as small objects with a large variety of shapes, characterization of mitosis is a challenging problem. The inter-scale dependencies of wavelet coefficients allow extraction of important texture features within the cells that are likely to appear across all scales. The wavelet-based statistical features were evaluated on a dataset containing 327 mitotic and 406 non-mitotic cells via a support vector machine classifier in iterative cross-validation. The quantitative results showed that our DT-CWT based approach achieved superior classification performance with an accuracy of 87.94%, sensitivity of 86.80%, specificity of 89.89%, and an area under the curve (AUC) value of 0.94.
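The per-subband statistics pipeline can be sketched with a one-level real Haar decomposition standing in for the DT-CWT (the paper uses the complex transform; the five statistics here, mean, variance, skewness, kurtosis, and energy, are an assumed choice of "commonly used" ones):

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar decomposition: a simple stand-in for the
    dual-tree complex wavelet transform used in the paper."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4   # low-pass approximation
    lh = (a + b - c - d) / 4   # horizontal detail
    hl = (a - b + c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

def subband_stats(sub):
    """Five statistics per subband: mean, variance, skewness,
    kurtosis, and energy."""
    x = sub.ravel().astype(float)
    m, s = x.mean(), x.std() + 1e-12
    z = (x - m) / s
    return [m, x.var(), (z**3).mean(), (z**4).mean(), (x**2).mean()]

rng = np.random.default_rng(0)
patch = rng.random((32, 32))   # a mock cell patch
features = [st for sb in haar_subbands(patch) for st in subband_stats(sb)]
# 4 subbands x 5 statistics = a 20-dimensional feature vector per level
```

Repeating the decomposition on the LL subband would yield the multi-scale forms the abstract describes, with 20 more features per level.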


Neurocomputing | 2018

Auto-painter: Cartoon image generation from sketch by using conditional Wasserstein generative adversarial networks

Yifan Liu; Zengchang Qin; Tao Wan; Zhenbo Luo

Recently, realistic image generation using deep neural networks has become a hot topic in machine learning and computer vision. Such images can be generated at the pixel level by learning from a large collection of images. Learning to generate colorful cartoon images from black-and-white sketches is not only an interesting research problem, but also a useful application in digital entertainment. In this paper, we investigate the sketch-to-image synthesis problem using conditional generative adversarial networks (cGAN). We propose a model called auto-painter that can automatically generate compatible colors for a given sketch. The Wasserstein distance is used in training the cGAN to avoid mode collapse and help the model converge. The new model is not only capable of painting hand-drawn sketches with compatible colors, but also allows users to indicate preferred colors. Experimental results on different sketch datasets show that the auto-painter performs better than existing image-to-image methods.
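The Wasserstein objective that replaces the standard GAN loss can be written down directly from critic outputs. A minimal NumPy sketch (the critic scores are mocked; a real critic would be a neural network with a Lipschitz constraint):

```python
import numpy as np

def wasserstein_losses(critic_real, critic_fake):
    """Wasserstein GAN objectives from critic outputs.

    Critic loss:     E[critic(fake)] - E[critic(real)]  (minimized by critic)
    Generator loss: -E[critic(fake)]                    (minimized by generator)
    """
    d_loss = critic_fake.mean() - critic_real.mean()
    g_loss = -critic_fake.mean()
    return d_loss, g_loss

real = np.array([0.9, 0.8, 0.7])  # critic scores on real cartoon images
fake = np.array([0.1, 0.2, 0.3])  # critic scores on auto-painter outputs
d_loss, g_loss = wasserstein_losses(real, fake)
# d_loss ≈ -0.6: a negative critic loss means real and fake are well separated
```

Because this loss gives non-vanishing gradients even when the critic separates the two distributions well, training tends to be more stable than with the saturating cross-entropy GAN loss, which is the convergence benefit the abstract refers to.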


international symposium on biomedical imaging | 2016

An automatic breast cancer grading method in histopathological images based on pixel-, object-, and semantic-level features

Jiajia Cao; Zengchang Qin; Juan Jing; Jianhui Chen; Tao Wan

We present an automatic breast cancer grading method for histopathological images based on computer-extracted pixel-, object-, and semantic-level features derived from convolutional neural networks (CNN). The multi-level features allow not only characterization of nuclear pleomorphism, but also extraction of structural and interpretable information from the images. In this study, a hybrid level-set based segmentation method was used to segment nuclei from the images. A quantile normalization approach was utilized to improve image color consistency. The semantic-level features, which describe the proportions of nuclei belonging to the different grades, are extracted by a CNN approach and combined with pixel-level (texture) and object-level (structure) features to form an integrated set of attributes. A support vector machine classifier was trained to discriminate between low, intermediate, and high grades of breast cancer. The results demonstrated that our method achieved accuracies of 0.90 (low vs. high), 0.74 (low vs. intermediate), and 0.76 (intermediate vs. high), suggesting that the present method could play a fundamental role in developing a computer-aided breast cancer grading system.
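The quantile normalization step for color consistency can be sketched per channel: each pixel keeps its rank but takes its value from a reference distribution. This is a minimal single-channel sketch, assuming a simple rank-matching variant:

```python
import numpy as np

def quantile_normalize(channel, reference):
    """Map one color channel's intensity distribution onto a reference
    distribution, preserving pixel ranks (a per-channel sketch of
    quantile normalization for stain color consistency)."""
    src = channel.ravel()
    order = np.argsort(src)                    # pixel ranks in the source
    ref_sorted = np.sort(reference.ravel())
    # Resample the reference to the source length, then assign by rank
    idx = np.linspace(0, len(ref_sorted) - 1, len(src)).round().astype(int)
    out = np.empty_like(src, dtype=float)
    out[order] = ref_sorted[idx]
    return out.reshape(channel.shape)

rng = np.random.default_rng(1)
img = rng.random((8, 8)) * 0.5   # a dull, under-stained channel
ref = rng.random((8, 8))         # a well-stained reference channel
norm = quantile_normalize(img, ref)
# norm now has exactly the reference's intensity distribution
```

Applying this to each channel of a stained slide pushes all images toward a common color distribution before feature extraction.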


international conference on acoustics, speech, and signal processing | 2013

Color saliency model based on mean shift segmentation

Xu Liu; Zengchang Qin; Xiaofan Zhang; Tao Wan

Saliency detection is one of the extraordinary capabilities of the human visual system (HVS). In this paper, we present a novel saliency detection model to capture visual selective attention in images. The new model requires neither prior knowledge of salient regions nor manual labeling. The mean shift segmentation algorithm and the quaternion discrete cosine transform (QDCT) are used to generate a rough saliency map by integrating low-level features and spatial saliency information. In each segmented region, the color saliency is measured based on the probability of its occurrence in the foreground and background defined by the rough saliency map. Experimental results on a widely used benchmark database demonstrate that the presented model achieves the best performance in terms of visual and quantitative evaluations compared to existing state-of-the-art saliency detection models.
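The color-saliency measure can be sketched as a ratio of occurrence probabilities over quantized colors. The quantization into a small palette and the exact ratio form are assumptions for illustration:

```python
import numpy as np

def color_saliency(fg_colors, bg_colors, n_bins=8):
    """Saliency of each quantized color: how likely it is to occur in
    the foreground of the rough saliency map versus the background."""
    fg_hist = np.bincount(fg_colors, minlength=n_bins).astype(float)
    bg_hist = np.bincount(bg_colors, minlength=n_bins).astype(float)
    p_fg = fg_hist / max(fg_hist.sum(), 1)
    p_bg = bg_hist / max(bg_hist.sum(), 1)
    return p_fg / (p_fg + p_bg + 1e-12)

# Quantized color indices inside / outside the rough salient region
fg = np.array([0, 0, 0, 1, 2])
bg = np.array([1, 1, 2, 3, 3, 3])
sal = color_saliency(fg, bg)
# Color 0 appears only in the foreground -> saliency near 1;
# color 3 appears only in the background -> saliency near 0
```

Projecting these per-color scores back onto the mean-shift regions would refine the rough map into the final saliency map.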


international symposium on biomedical imaging | 2016

An improved hybrid active contour model for nuclear segmentation on breast cancer histopathology

Juan Jing; Tao Wan; Jiajia Cao; Zengchang Qin

Segmentation of nuclei in breast cancer histopathological images is a basic and essential step for diagnosis in a computer-aided diagnosis framework. Nuclear segmentation remains a challenging problem due to the inherent diversity of cancer biology and the variability of tissue appearance. We present an automatic nuclear segmentation method using an improved hybrid active contour (AC) model driven by both boundary and region information. Initialization based on morphological operations and watershed allows generation of initial closed curves and reduces the computational load of curve evolution in the AC model. Color gradients are computed to capture image gradients along the nuclear margin. The AC segmentation is performed in a coarse-to-fine fashion, which helps to resolve overlap between multiple objects in an image scene. Segmentation performance was evaluated on breast cancer histopathological images of different grades and compared with existing popular AC models, suggesting that our improved hybrid active contour model can be used to build an accurate and robust nuclear segmentation tool.
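The morphological cleanup that precedes the watershed initialization can be sketched with a pure-NumPy opening (erosion then dilation), which removes specks smaller than the structuring element while keeping nucleus-sized blobs; the 3x3 element is an assumption:

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion (pure-NumPy stand-in for the morphological
    operations used to initialize the active contour)."""
    p = np.pad(mask.astype(bool), 1)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation (the dual of erode)."""
    p = np.pad(mask.astype(bool), 1)
    h, w = mask.shape
    out = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[1:5, 1:5] = True        # a nucleus-sized blob
mask[6, 6] = True            # an isolated speck of noise
opened = dilate(erode(mask)) # opening removes the speck, keeps the blob
```

The connected components that survive the opening would then serve as watershed seeds, giving the AC model closed initial curves near each nucleus.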


visual communications and image processing | 2013

Salient object detection in image sequences via spatial-temporal cue

Chuang Gan; Zengchang Qin; Jia Xu; Tao Wan

Contemporary video search and categorization are non-trivial tasks due to the massively increasing amount and content variety of videos. We put forward the study of visual saliency models in video, where such a model is employed to separate salient objects from the image background. Starting from the observation that motion in video often attracts more human attention than static content, we devise a region-contrast based saliency detection model using spatial-temporal cues (RCST). We introduce and study four saliency principles to realize the RCST, generalizing previous static-image saliency computational models to video. We conduct experiments on a publicly available video segmentation database where our method significantly outperforms seven state-of-the-art methods with respect to the PR curve, ROC curve, and visual comparison.
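The spatial-temporal region contrast idea can be sketched as follows: each region's saliency is its weighted contrast against all other regions, where the contrast mixes a color cue and a motion cue. The scalar per-region descriptors and the mixing weight `alpha` are hypothetical simplifications of the RCST model:

```python
import numpy as np

def region_contrast_saliency(colors, motions, sizes, alpha=0.5):
    """Per-region saliency as contrast against all other regions,
    combining a color (spatial) cue and a motion (temporal) cue."""
    n = len(colors)
    sal = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            contrast = (alpha * abs(colors[i] - colors[j])
                        + (1 - alpha) * abs(motions[i] - motions[j]))
            sal[i] += sizes[j] * contrast  # bigger regions weigh more
    return sal / sal.max()

colors  = np.array([0.1, 0.9, 0.15])
motions = np.array([0.0, 0.8, 0.05])   # region 1 moves, so it stands out
sizes   = np.array([0.5, 0.2, 0.3])
sal = region_contrast_saliency(colors, motions, sizes)
# The small, moving, differently colored region gets the top saliency
```

Dropping the motion term recovers a purely spatial region-contrast model, which is the static-image special case the abstract generalizes from.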


Computer Methods and Programs in Biomedicine | 2018

Automated Identification and Grading of Coronary Artery Stenoses with X-ray Angiography

Tao Wan; Hongxiang Feng; Chao Tong; Deyu Li; Zengchang Qin

BACKGROUND AND OBJECTIVE
X-ray coronary angiography (XCA) remains the gold standard imaging technique for the diagnosis and treatment of cardiovascular disease. Automatic detection and grading of coronary stenoses in XCA are challenging problems due to the complex overlap of different background structures with intensity inhomogeneities. We present a new computerized image-based method to accurately identify and quantify stenosis severity on XCA.

METHODS
A unified framework, consisting of Hessian-based vessel enhancement, level-set skeletonization, an improved measure-of-match measurement, and local extremum identification, is developed to distinctly reveal the vessel structures and accurately determine the stenosis grades. The methodology was validated on 143 consecutive patients who underwent diagnostic XCA through both qualitative and quantitative evaluations.

RESULTS
The presented algorithm was tested on a set of 267 vessel segments annotated by two expert cardiologists. The experimental results show that the method can effectively localize and quantify the vessel stenoses, achieving average detection accuracy, sensitivity, specificity, and F-score of 93.93%, 91.03%, 93.83%, and 89.18%, respectively.

CONCLUSIONS
A fully automatic coronary analysis method is devised for vessel stenosis detection and grading in XCA. The presented approach can potentially serve as a generalized framework to handle different image modalities.
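The local-extremum grading step can be sketched from a diameter profile sampled along the vessel centerline: the stenosis grade is the relative narrowing at the profile's minimum. The reference-diameter rule here (mean of the upper quartile) is an assumption, not the paper's exact rule:

```python
import numpy as np

def stenosis_percent(diameters):
    """Percent diameter stenosis at the local minimum of a vessel
    diameter profile, relative to an assumed healthy reference
    diameter (mean of the profile's upper quartile)."""
    d = np.asarray(diameters, dtype=float)
    d_min = d.min()                                  # local extremum
    d_ref = np.sort(d)[-max(1, len(d) // 4):].mean() # healthy reference
    return 100.0 * (1.0 - d_min / d_ref)

# Diameters (mm) sampled along a centerline with a narrowing mid-segment
profile = [3.0, 3.0, 2.9, 1.2, 2.8, 3.0, 3.1]
grade = stenosis_percent(profile)
# Roughly 60% diameter stenosis, i.e. clinically significant
```

In the full pipeline, the diameter profile itself would come from the enhanced, skeletonized vessel, with the measure-of-match step locating vessel borders around each centerline point.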


Computer Methods and Programs in Biomedicine | 2018

Automated coronary artery tree segmentation in X-ray angiography using improved Hessian based enhancement and statistical region merging

Tao Wan; Xiaoqing Shang; Weilin Yang; Jianhui Chen; Deyu Li; Zengchang Qin

BACKGROUND AND OBJECTIVE
Coronary artery segmentation is a fundamental step for a computer-aided diagnosis system developed to assist cardiothoracic radiologists in detecting coronary artery diseases. Manual delineation of the vasculature becomes tedious or even impossible with the large number of images acquired in daily clinical practice. A new computerized image-based segmentation method is presented for automatically extracting coronary arteries from angiography images.

METHODS
A combination of a multiscale adaptive Hessian-based enhancement method and a statistical region merging technique provides a simple and effective way to enhance complex vessel structures and delineate thin vessels, which are often missed by other segmentation methods. The methodology was validated on 100 patients who underwent diagnostic coronary angiography. The segmentation performance was assessed via both qualitative and quantitative evaluations.

RESULTS
Quantitative evaluation shows that our method is able to identify coronary artery trees with an accuracy of 93% and outperforms other segmentation methods in terms of two widely used segmentation metrics: mean absolute difference and Dice similarity coefficient.

CONCLUSIONS
Comparison with manual segmentations from three human observers suggests that the presented automated segmentation method has the potential to be used in an image-based computerized analysis system for early detection of coronary artery disease.
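Hessian-based enhancement rests on the eigenvalues of the image Hessian: along a vessel one eigenvalue is near zero and the other is large. A minimal single-scale, Frangi-style sketch (the paper uses a multiscale adaptive variant; `beta` and `c` values are illustrative):

```python
import numpy as np

def vesselness(img, beta=0.5, c=0.5):
    """Single-scale Frangi-style vesselness from 2-D Hessian
    eigenvalues; a minimal sketch of Hessian-based enhancement."""
    gy, gx = np.gradient(img.astype(float))
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # Eigenvalues of the 2x2 Hessian at every pixel
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    l1 = (hxx + hyy) / 2 + tmp
    l2 = (hxx + hyy) / 2 - tmp
    # Order so that |lam1| <= |lam2|
    lam1 = np.where(np.abs(l1) <= np.abs(l2), l1, l2)
    lam2 = np.where(np.abs(l1) <= np.abs(l2), l2, l1)
    rb = np.abs(lam1) / (np.abs(lam2) + 1e-12)  # blob vs. tube ratio
    s = np.sqrt(lam1 ** 2 + lam2 ** 2)          # structure strength
    v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
    v[lam2 < 0] = 0   # keep dark tubes only (vessels appear dark in XCA)
    return v

img = np.ones((15, 15))
img[7, :] = 0.0          # a dark horizontal "vessel" on bright background
v = vesselness(img)
# The response peaks along row 7 and vanishes in the flat background
```

Thresholding such a response, then refining with statistical region merging, is the two-stage route the abstract describes for recovering thin branches that a threshold alone would miss.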


visual communications and image processing | 2013

What color is an object

Xiaofan Zhang; Zengchang Qin; Xu Liu; Tao Wan

Color perception is one of the major cognitive abilities of human beings. Color information is also one of the most important features in various computer vision tasks, including object recognition, tracking, and scene classification. In this paper, we propose a simple and effective method for learning the color composition of objects from large annotated datasets. The proposed model is based on a region-based bag-of-colors model and saliency detection. The effectiveness of the model is empirically verified on manually labelled datasets with single or multiple tags. The significance of this research is that the color information of an object can provide useful prior knowledge to help improve existing computer vision models for image segmentation, object recognition, and tracking.
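The bag-of-colors representation can be sketched as nearest-palette quantization followed by a normalized histogram. The three-color palette and the pixel values are illustrative assumptions:

```python
import numpy as np

def bag_of_colors(pixels, palette):
    """Quantize RGB pixels to their nearest palette color and return
    the normalized color histogram: the object's color composition."""
    # Pairwise distances between every pixel and every palette color
    d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(palette)).astype(float)
    return hist / hist.sum()

palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
# A mostly red object with a little blue
pixels = np.array([[250, 10, 5], [240, 0, 0], [200, 30, 20], [10, 5, 250]],
                  dtype=float)
comp = bag_of_colors(pixels, palette)
# comp ≈ [0.75, 0.0, 0.25]: the object is "three-quarters red"
```

In the full model, restricting `pixels` to the salient region (rather than the whole image) is what ties the learned color composition to the object instead of its background.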

Collaboration


Top Co-Author

Tao Wan

Case Western Reserve University