Publication


Featured research published by Yun Zhou.


IEEE Transactions on Medical Imaging | 2013

Feature-Based Image Patch Approximation for Lung Tissue Classification

Yang Song; Weidong Cai; Yun Zhou; David Dagan Feng

In this paper, we propose a new classification method for five categories of lung tissues in high-resolution computed tomography (HRCT) images, using feature-based image patch approximation. We design two new feature descriptors for higher feature descriptiveness, namely the rotation-invariant Gabor-local binary patterns (RGLBP) texture descriptor and the multi-coordinate histogram of oriented gradients (MCHOG) gradient descriptor. Together with intensity features, each image patch is then labeled based on its feature approximation from reference image patches. A new patch-adaptive sparse approximation (PASA) method is designed with the following main components: minimum discrepancy criteria for sparse-based classification, patch-specific adaptation for discriminative approximation, and feature-space weighting for distance computation. The patch-wise labelings are then accumulated as probabilistic estimations for region-level classification. The proposed method is evaluated on a publicly available ILD database, showing encouraging performance improvements over state-of-the-art methods.
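
The minimum-discrepancy rule at the heart of this kind of sparse classification can be illustrated with a short sketch: encode the test patch's feature vector over each class's reference patches and pick the class with the smallest reconstruction residual. This is only a minimal sketch assuming scikit-learn's OMP-based sparse coding; the patch-specific adaptation and feature-space weighting of PASA are omitted, and the function and variable names are illustrative.

```python
# Minimal residual-based sparse classification sketch (core idea only; the
# patch-adaptive and feature-weighting components of PASA are not reproduced).
import numpy as np
from sklearn.decomposition import sparse_encode

def classify_patch(feature, class_dictionaries, n_nonzero_coefs=5):
    """feature: (d,) descriptor of one image patch.
    class_dictionaries: {class label: (n_refs, d) reference feature matrix}."""
    residuals = {}
    for label, D in class_dictionaries.items():
        # Sparse code of the test feature over this class's reference patches.
        code = sparse_encode(feature[None, :], D, algorithm="omp",
                             n_nonzero_coefs=min(n_nonzero_coefs, len(D)))
        reconstruction = (code @ D)[0]
        residuals[label] = np.linalg.norm(feature - reconstruction)
    # Minimum-discrepancy rule: the class whose references approximate the patch best.
    return min(residuals, key=residuals.get)
```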


International Conference on Control, Automation, Robotics and Vision | 2014

Medical image classification with convolutional neural network

Qing Li; Weidong Cai; Xiaogang Wang; Yun Zhou; David Dagan Feng; Mei Chen

Image patch classification is an important task in many medical imaging applications. In this work, we have designed a customized Convolutional Neural Network (CNN) with a shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification task. The same architecture can be generalized to other medical image or texture classification tasks.
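
As a rough picture of what a shallow patch CNN looks like, here is a minimal PyTorch sketch assuming 32x32 grayscale patches and five ILD classes; the paper's exact architecture and hyper-parameters may differ.

```python
# Minimal shallow-CNN sketch for lung patch classification (assumed 32x32
# grayscale inputs and 5 classes; not the paper's exact configuration).
import torch
import torch.nn as nn

class ShallowPatchCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5),  # single shallow convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 28x28 feature maps -> 14x14
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 14 * 14, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                     # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

# Example: logits = ShallowPatchCNN()(torch.randn(8, 1, 32, 32))  # -> (8, 5)
```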


IEEE Transactions on Biomedical Engineering | 2014

Lung Nodule Classification With Multilevel Patch-Based Context Analysis

Fan Zhang; Yang Song; Weidong Cai; Min-Zhao Lee; Yun Zhou; Heng Huang; Shimin Shan; Michael J. Fulham; David Dagan Feng

In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low-dose computed tomography scans. The proposed method is based on contextual analysis that combines the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct a concentric multilevel partition; a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description; and a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and demonstrated promising classification performance.
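
A concentric multilevel partition can be pictured with a small sketch that bins each pixel of a square nodule patch by its distance from the patch centre; the adaptive, patch-based division used in the paper is more elaborate than this fixed radial split.

```python
# Illustrative concentric partition: assign each pixel of a square patch to one
# of n_levels rings around the centre (a simplification of the paper's adaptive
# patch-based division).
import numpy as np

def concentric_levels(patch_size, n_levels=3):
    ys, xs = np.mgrid[0:patch_size, 0:patch_size]
    centre = (patch_size - 1) / 2.0
    r = np.hypot(ys - centre, xs - centre)   # distance of each pixel to the centre
    edges = np.linspace(0, r.max(), n_levels + 1)
    return np.digitize(r, edges[1:-1])       # level index 0..n_levels-1 per pixel
```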


IEEE Transactions on Medical Imaging | 2015

Large Margin Local Estimate With Applications to Medical Image Classification

Yang Song; Weidong Cai; Heng Huang; Yun Zhou; David Dagan Feng; Yue Wang; Michael J. Fulham; Mei Chen

Medical images usually exhibit large intra-class variation and inter-class ambiguity in the feature space, which can affect classification accuracy. To tackle this issue, we propose a new Large Margin Local Estimate (LMLE) classification model with sub-categorization based sparse representation. We first sub-categorize the reference sets of different classes into multiple clusters, to reduce feature variation within each subcategory compared to the entire reference set. Local estimates are generated for the test image using sparse representation with the reference subcategories as dictionaries. The similarity between the test image and each class is then computed by fusing the distances to the local estimates in a learning-based large margin aggregation construct, to alleviate the problem of inter-class ambiguity. The derived similarities are finally used to determine the class label. We demonstrate that our LMLE model is generally applicable to different imaging modalities, and apply it to three tasks: interstitial lung disease (ILD) classification on high-resolution computed tomography (HRCT) images, and phenotype binary classification and continuous regression on brain magnetic resonance (MR) imaging. Our experimental results show statistically significant performance improvements over existing popular classifiers.
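
A minimal sketch of the sub-categorization and local-estimate steps follows, assuming k-means for clustering and scikit-learn's OMP-based sparse coding; the learned large-margin aggregation that fuses these estimates into similarities is not reproduced, and all names are illustrative.

```python
# Sketch of LMLE-style sub-categorization and local estimates (k-means and OMP
# stand in for the paper's choices; the large-margin fusion step is omitted).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import sparse_encode

def local_estimates(feature, class_refs, n_subcats=4, n_nonzero_coefs=5):
    """feature: (d,) test descriptor.
    class_refs: {class label: (n_refs, d) reference feature matrix}."""
    estimates = {}
    for label, refs in class_refs.items():
        clusters = KMeans(n_clusters=n_subcats, n_init=10).fit_predict(refs)
        per_class = []
        for c in range(n_subcats):
            D = refs[clusters == c]                  # one subcategory dictionary
            code = sparse_encode(feature[None, :], D, algorithm="omp",
                                 n_nonzero_coefs=min(n_nonzero_coefs, len(D)))
            per_class.append((code @ D)[0])          # local estimate of the test image
        estimates[label] = np.vstack(per_class)      # (n_subcats, d) per class
    return estimates
```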


Medical Image Analysis | 2015

Locality-constrained Subcluster Representation Ensemble for lung image classification

Yang Song; Weidong Cai; Heng Huang; Yun Zhou; Yue Wang; David Dagan Feng

In this paper, we propose a new Locality-constrained Subcluster Representation Ensemble (LSRE) model to classify high-resolution computed tomography (HRCT) images of interstitial lung diseases (ILDs). Medical images normally exhibit large intra-class variation and inter-class ambiguity in the feature space; modelling the feature-space separation between different classes is thus problematic, and this affects classification performance. Our LSRE model tackles this issue in an ensemble classification construct. The image set is first partitioned into subclusters based on spectral clustering with an approximation-based affinity matrix. Basis representations of the test image are then generated with sparse approximation from the subclusters. These basis representations are finally fused with approximation- and distribution-based weights to classify the test image. Our experimental results on a large HRCT database show good performance improvement over existing popular classifiers.
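
The subcluster partitioning step might look like the sketch below, which assumes a simple RBF affinity over image features; the paper's approximation-based affinity matrix and the weighted fusion of basis representations are omitted.

```python
# Sketch of the spectral subcluster partitioning (RBF affinity assumed here;
# the paper builds its affinity from sparse approximations instead).
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def partition_into_subclusters(features, n_subclusters=10, gamma=0.5):
    affinity = rbf_kernel(features, gamma=gamma)            # (n_images, n_images)
    labels = SpectralClustering(n_clusters=n_subclusters,
                                affinity="precomputed").fit_predict(affinity)
    return labels                                            # subcluster index per image
```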


Digital Image Computing: Techniques and Applications | 2013

Context Curves for Classification of Lung Nodule Images

Fan Zhang; Yang Song; Weidong Cai; Yun Zhou; Shimin Shan; Dagan Feng

In this paper, a feature-based image classification method is presented to classify lung nodules in low-dose computed tomography (LDCT) slices into four categories: well-circumscribed, vascularized, juxta-pleural, and pleural-tail. The proposed method focuses on the feature design, which describes both the lung nodule and its surrounding context, and contains two main stages: (1) superpixel labeling, which labels pixels as foreground or background based on an image patch division approach; and (2) context curve calculation, which transfers the superpixel labeling result into a feature vector. While the first stage preprocesses the image, extracting the major contextual anatomical structures for each type of nodule, the context curve provides a discriminative description for intra- and inter-type nodules. The evaluation is conducted on a publicly available dataset and the results indicate the promising performance of the proposed method for lung nodule classification.
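
One plausible way to turn such a foreground/background labeling into a curve-like feature vector is to record the foreground fraction within rings of increasing radius around the nodule centre, as in the hypothetical sketch below; the paper's exact context-curve construction may differ.

```python
# Hypothetical context-curve sketch: foreground fraction per concentric ring
# (an illustration of converting a labeling into a 1-D descriptor, not
# necessarily the paper's exact construction).
import numpy as np

def context_curve(fg_mask, n_rings=8):
    """fg_mask: 2-D boolean array, True where a pixel was labeled foreground."""
    h, w = fg_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - (h - 1) / 2.0, xs - (w - 1) / 2.0)
    edges = np.linspace(0, r.max(), n_rings + 1)
    ring = np.digitize(r, edges[1:-1])
    return np.array([fg_mask[ring == i].mean() for i in range(n_rings)])
```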


The Journal of Nuclear Medicine | 2013

Biodistribution and Radiation Dosimetry of 18F-CP-18, a Potential Apoptosis Imaging Agent, as Determined from PET/CT Scans in Healthy Volunteers

Yang Song; Weidong Cai; Yun Zhou; Lingfeng Wen; Dagan Feng

18F-CP-18, or (18S,21S,24S,27S,30S)-27-(2-carboxyethyl)-21-(carboxymethyl)-30-((2S,3R,4R,5R,6S)-6-((2-(4-(3-F18-fluoropropyl)-1H-1,2,3-triazol-1-yl)acetamido)methyl)-3,4,5-trihydroxytetrahydro-2H-pyran-2-carboxamido)-24-isopropyl-18-methyl-17,20,23,26,29-pentaoxo-4,7,10,13-tetraoxa-16,19,22,25,28-pentaazadotriacontane-1,32-dioic acid, is being evaluated as a tissue apoptosis marker for PET imaging. The purpose of this study was to determine the biodistribution and estimate the normal-organ radiation-absorbed doses and effective dose from 18F-CP-18. Methods: Successive whole-body PET/CT scans were obtained at approximately 7, 45, 90, 130, and 170 min after intravenous injection of 18F-CP-18 in 7 healthy human volunteers. Blood samples and urine were collected between the PET/CT scans, and the biostability of 18F-CP-18 was assessed using high-performance liquid chromatography. The PET scans were analyzed to determine the radiotracer uptake in different organs. OLINDA/EXM software was used to calculate human radiation doses based on the biodistribution of the tracer. Results: 18F-CP-18 was 54% intact in human blood at 135 min after injection. The tracer cleared rapidly from the blood pool with a half-life of approximately 30 min. Relatively high 18F-CP-18 uptake was observed in the kidneys and bladder, with diffuse uptake in the liver and heart. The mean standardized uptake values (SUVs) in the bladder, kidneys, heart, and liver at around 50 min after injection were approximately 65, 6, 1.5, and 1.5, respectively. The calculated effective dose was 38 ± 4 μSv/MBq, with the urinary bladder wall having the highest absorbed dose at 536 ± 61 μGy/MBq using a 4.8-h bladder-voiding interval for the male phantom. For a 1-h voiding interval, these doses were reduced to 15 ± 2 μSv/MBq and 142 ± 15 μGy/MBq, respectively. For a typical injected activity of 555 MBq, the effective dose would be 21.1 ± 2.2 mSv for the 4.8-h interval, reduced to 8.3 ± 1.1 mSv for the 1-h interval. Conclusion: 18F-CP-18 cleared rapidly through the renal system. The urinary bladder wall received the highest radiation dose and was deemed the critical organ. Both the effective dose and the bladder dose can be reduced by frequent voiding. From the radiation dosimetry perspective, the apoptosis imaging agent 18F-CP-18 is suitable for human use.
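
The whole-study doses quoted above follow from a linear scaling of the per-activity values, which a quick check reproduces:

```python
# Quick check of the linear scaling quoted above: per-MBq effective dose times
# a 555 MBq injection gives the reported whole-study effective doses.
effective_dose_mSv_per_MBq = {"4.8 h voiding interval": 0.038, "1 h voiding interval": 0.015}
injected_activity_MBq = 555
for interval, dose_per_MBq in effective_dose_mSv_per_MBq.items():
    print(f"{interval}: {dose_per_MBq * injected_activity_MBq:.1f} mSv")
# -> 4.8 h voiding interval: 21.1 mSv; 1 h voiding interval: 8.3 mSv
```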


IEEE Transactions on Medical Imaging | 2014

Lesion Detection and Characterization With Context Driven Approximation in Thoracic FDG PET-CT Images of NSCLC Studies

Yang Song; Weidong Cai; Heng Huang; Xiaogang Wang; Yun Zhou; Michael J. Fulham; David Dagan Feng

We present a lesion detection and characterization method for 18F-fluorodeoxyglucose positron emission tomography-computed tomography (FDG PET-CT) images of the thorax in the evaluation of patients with primary non-small cell lung cancer (NSCLC) and regional nodal disease. Lesion detection can be difficult due to low contrast between lesions and normal anatomical structures. Lesion characterization is also challenging due to similar spatial characteristics between lung tumors and abnormal lymph nodes. To tackle these problems, we propose a context driven approximation (CDA) method with two main components. First, a sparse representation technique with region-level contexts is designed for lesion detection. To discriminate low-contrast data with sparse representation, we propose a reference consistency constraint and a spatial consistency constraint. Second, a multi-atlas technique with image-level contexts is designed to represent the spatial characteristics for lesion characterization. To accommodate inter-subject variation in a multi-atlas model, we propose an appearance constraint and a similarity constraint. The CDA method is effective with a simple feature set and does not require parametric modeling of feature space separation. Experiments on a clinical FDG PET-CT dataset show promising performance improvement over the state-of-the-art.
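
For the characterization stage, the multi-atlas idea can be sketched generically as similarity-weighted label voting among registered atlases; the appearance and similarity constraints that the CDA method adds go beyond this baseline, and the names below are illustrative.

```python
# Generic multi-atlas label fusion sketch (similarity-weighted voting only;
# the CDA appearance and similarity constraints are not reproduced).
import numpy as np

def fuse_atlas_labels(subject_feature, atlas_features, atlas_labels):
    """atlas_features: (n_atlases, d) image-level features of registered atlases.
    atlas_labels: (n_atlases,) label of each atlas (e.g. tumor vs. lymph node)."""
    dists = np.linalg.norm(atlas_features - subject_feature, axis=1)
    weights = np.exp(-dists / (dists.mean() + 1e-8))   # closer atlases get larger weights
    scores = {lbl: weights[atlas_labels == lbl].sum() for lbl in np.unique(atlas_labels)}
    return max(scores, key=scores.get)                 # label with the largest weighted vote
```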


International Symposium on Biomedical Imaging | 2013

Pathology-centric medical image retrieval with hierarchical contextual spatial descriptor

Yang Song; Weidong Cai; Yun Zhou; Lingfeng Wen; David Dagan Feng

Content-based image retrieval has been suggested as an aid to medical diagnosis. Techniques based on standard feature descriptors, however, might not optimally represent the pathological characteristics in medical images. In this paper, we propose a new approach for medical image retrieval based on pathology-centric feature extraction and representation, in which a patch-based local feature extraction scheme and a hierarchical contextual spatial descriptor are designed. The proposed method is evaluated on positron emission tomography-computed tomography (PET-CT) images from subjects with non-small cell lung cancer (NSCLC), showing promising performance improvements over other benchmarked techniques.
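
Stripped of the descriptor design itself, the retrieval step reduces to nearest-neighbour search over feature vectors, as in this generic sketch (the hierarchical contextual spatial descriptor is assumed to have been computed elsewhere).

```python
# Generic descriptor-based retrieval sketch: return the k database images whose
# descriptors are closest to the query (the paper's descriptor is assumed given).
from sklearn.neighbors import NearestNeighbors

def retrieve(query_descriptor, db_descriptors, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(db_descriptors)  # db_descriptors: (n_images, d)
    _, indices = nn.kneighbors(query_descriptor[None, :])
    return indices[0]                                          # indices of the top-k matches
```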


Medical Image Computing and Computer-Assisted Intervention | 2013

Discriminative Data Transform for Image Feature Extraction and Classification

Yang Song; Weidong Cai; Seungil Huh; Mei Chen; Takeo Kanade; Yun Zhou; Dagan Feng

Good feature design is important to achieve effective image classification. This paper presents a novel feature design with two main contributions. First, prior to computing the feature descriptors, we propose to transform the images with learning-based filters to obtain more representative feature descriptors. Second, we propose to transform the computed descriptors with another set of learning-based filters to further improve the classification accuracy. In this way, while generic feature descriptors are used, data-adaptive information is integrated into the feature extraction process based on the optimization objective to enhance the discriminative power of feature descriptors. The feature design is applicable to different application domains, and is evaluated on both lung tissue classification in high-resolution computed tomography (HRCT) images and apoptosis detection in time-lapse phase contrast microscopy image sequences. Both experiments show promising performance improvements over the state-of-the-art.
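
The descriptor-level transform can be approximated with an off-the-shelf discriminative projection such as LDA placed in front of a linear classifier; the paper instead learns its filters from the classification objective, so the pipeline below is only a stand-in sketch.

```python
# Stand-in sketch for the descriptor-transform idea: project descriptors with a
# learned discriminative transform (LDA here) before classification. The paper
# learns its own filters; LDA is used only for illustration.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def build_classifier():
    return make_pipeline(LinearDiscriminantAnalysis(), LinearSVC())

# Usage: clf = build_classifier(); clf.fit(train_descriptors, train_labels)
```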

Collaboration


Dive into Yun Zhou's collaborations.

Top Co-Authors

Heng Huang
University of Texas at Arlington

Michael J. Fulham
Royal Prince Alfred Hospital

Fan Zhang
Brigham and Women's Hospital

Mei Chen
State University of New York System

Shimin Shan
Dalian University of Technology