Publication


Featured research published by Zhong Xue.


Proceedings of SPIE | 2012

A novel approach for three-dimensional dendrite spine segmentation and classification

Tiancheng He; Zhong Xue; Stephen T. C. Wong

Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying the biochemical and genetic pathways by examining the morphological changes of the dendritic spines at the intracellular level. Automatic dendritic spine detection from high-resolution microscopic images is an important step for such morphological studies. In this paper, a novel approach to automated dendritic spine detection is proposed based on a nonlinear degeneration model. Dendritic spines are recognized as small objects with variable shapes attached to dendritic backbones. We explore the problem of dendritic spine detection from a different angle, i.e., the nonlinear degeneration equation (NDE) is utilized to enhance the morphological differences between the dendrite and spines. Using the NDE, we simulated degeneration for dendritic spine detection. Because of their morphological differences, dendrite pixels shrink at a different rate than spine pixels, so spines can be detected and segmented after the degeneration simulation. Then, to separate spines into different types, Gaussian curvature was employed, and biomimetic pattern recognition theory was applied for spine classification. In the experiments, we quantitatively compared the spine detection accuracy with that of previous methods, and the results demonstrated the accuracy and advantages of our method.
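
The paper's nonlinear degeneration equation is not reproduced here; the toy sketch below only illustrates the underlying intuition that thin, backbone-like structures shrink away faster than bulbous, spine-like blobs under an iterative shrinking process. Morphological erosion stands in for the NDE, and the synthetic image, sizes, and iteration count are invented for illustration.

```python
# Toy illustration (not the paper's NDE): a thin dendrite-like line vanishes
# under iterative shrinking faster than bulbous spine-like blobs, so what
# survives a few iterations can be treated as candidate spines.
import numpy as np
from scipy import ndimage

# Synthetic binary image: a 1-pixel-wide "dendrite" plus two round "spines".
img = np.zeros((64, 64), dtype=bool)
img[32, 5:60] = True                                 # thin backbone
yy, xx = np.ogrid[:64, :64]
img |= (yy - 29) ** 2 + (xx - 20) ** 2 <= 3 ** 2     # spine 1
img |= (yy - 35) ** 2 + (xx - 45) ** 2 <= 4 ** 2     # spine 2

shrunk = img.copy()
for _ in range(2):                                   # two shrinking iterations
    shrunk = ndimage.binary_erosion(shrunk)

# Pixels that survive the shrinking are spine candidates; label them as objects.
labels, n_spines = ndimage.label(shrunk)
print("candidate spine regions after shrinking:", n_spines)
```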


IEEE EMBS International Conference on Biomedical and Health Informatics | 2016

When discriminative K-means meets Grassmann manifold: Disease gene identification via a general multi-view clustering method

Danping Li; Lei Wang; Zhong Xue; Stephen T. C. Wong

Understanding the role of genetics in diseases is a challenging process that has multiple applications within functional genomics and precision medicine. In this paper, we present a general clustering method to identify disease genes under a multi-view setting. First, by incorporating the graph Laplacian of spectral clustering (SC) into the discriminative K-means, we obtain a single-view subspace representation, which is endowed with both discriminant power and geometrical structure information of that data layer. Then, integrating these individual subspaces together on the Grassmann manifold, we can further find a unified low-dimensional representation under the multi-view SC framework. The proposed two-stage method generalizes the single-view discriminative K-means and the multi-view Grassmann clustering, and can directly handle the case where both attribute-based data and interaction-based networks are available, which is extremely useful in biological research. As a case study of disease gene identification, we apply this method to a benchmark dataset that contains nine gene-by-term text profiles. Experimental results show that our method provides competitive results compared to state-of-the-art clustering methods, including a similar one that fuses multiple kernels and Laplacians.
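
As a rough, hedged illustration of the two-stage idea (a spectral subspace per view, followed by a Grassmann-style fusion of the subspaces and K-means), here is a minimal sketch. It omits the discriminative K-means term and all dataset-specific details, and the synthetic two-view data are invented for the example.

```python
# Minimal sketch (not the paper's exact formulation): compute a spectral
# embedding per view, fuse the per-view subspaces by averaging their
# projection matrices (a common Grassmann-manifold-style integration),
# then cluster the fused representation with K-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from scipy.linalg import eigh

def view_subspace(X, k):
    """Top-k eigenvectors of the normalized affinity matrix of one view."""
    W = rbf_kernel(X)                        # similarity graph for this view
    d = W.sum(axis=1)
    L_sym = (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    _, vecs = eigh(L_sym)
    return vecs[:, -k:]                      # n_samples x k subspace basis

def multiview_cluster(views, k):
    # Averaging the projection matrices U U^T places the fusion on the Grassmannian.
    P = sum(U @ U.T for U in (view_subspace(X, k) for X in views)) / len(views)
    _, vecs = eigh(P)
    fused = vecs[:, -k:]                     # unified low-dimensional embedding
    return KMeans(n_clusters=k, n_init=10).fit_predict(fused)

# Tiny synthetic example with two "views" of the same 3-cluster data.
rng = np.random.default_rng(0)
base = np.repeat(np.eye(3), 30, axis=0)
views = [base + 0.1 * rng.standard_normal(base.shape) for _ in range(2)]
print(multiview_cluster(views, k=3))
```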


Medical Image Computing and Computer Assisted Intervention | 2014

Estimating Dynamic Lung Images from High-Dimension Chest Surface Motion Using 4D Statistical Model

Tiancheng He; Zhong Xue; Nam Yu; Paige Nitsch; Bin S. Teh; Stephen T. C. Wong

Computed Tomography (CT) has been widely used in image-guided procedures such as intervention and radiotherapy of lung cancer. However, due to poor reproducibility of breath holding or respiratory cycles, discrepancies between static images and the patient's current lung shape and tumor location could potentially reduce the accuracy of image guidance. Current methods either use multiple intra-procedural scans or monitor respiratory motion with tracking sensors. Although intra-procedural scanning provides more accurate information, it increases the radiation dose and still only provides snapshots of the patient's chest. Tracking-based breath monitoring techniques can effectively detect respiratory phases but have not yet provided accurate tumor shape and location due to their low-dimensional signals. Therefore, estimating lung motion and generating dynamic CT images from real-time captured high-dimensional sensor signals is a key component for image-guided procedures. This paper applies a principal component analysis (PCA)-based statistical model to establish the relationship between lung motion and chest surface motion from training samples, on a template space, and then uses this model to estimate dynamic images for a new patient from the chest surface motion. Qualitative and quantitative results showed that the proposed high-dimensional estimation algorithm yielded more accurate 4D-CT compared to fiducial marker-based estimation.
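
The following is a minimal, hedged sketch of the kind of PCA-based estimation the abstract describes: two PCA subspaces (chest surface motion and internal motion) learned from paired training samples, linked by a linear regression, and used to predict a dense motion field from a new surface measurement. All sizes and the synthetic data are invented; this is not the paper's template-space model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_train, surf_dim, motion_dim = 40, 200, 3000      # invented toy sizes
A = rng.standard_normal((3, surf_dim))             # latent -> surface map
B = rng.standard_normal((3, motion_dim))           # latent -> lung motion map
latent = rng.standard_normal((n_train, 3))         # shared breathing factors
S, M = latent @ A, latent @ B                      # paired training samples

pca_s = PCA(n_components=3).fit(S)                 # chest-surface subspace
pca_m = PCA(n_components=3).fit(M)                 # internal-motion subspace
reg = LinearRegression().fit(pca_s.transform(S), pca_m.transform(M))

# Estimate the dense lung-motion field for a new chest-surface observation.
z_new = rng.standard_normal((1, 3))
s_new, m_true = z_new @ A, z_new @ B
m_hat = pca_m.inverse_transform(reg.predict(pca_s.transform(s_new)))
print("relative error:", np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))
```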


IEEE EMBS International Conference on Biomedical and Health Informatics | 2017

Transductive local fisher discriminant analysis for gene expression profile-based cancer classification

Danping Li; Lei Wang; Jiajun Wang; Zhong Xue; Stephen T. C. Wong

Gene expression profiles provide hidden biological knowledge and key information that can be used to distinguish different types of cancer. Due to their high dimensionality and redundancy, gene expression data are often preprocessed by dimensionality reduction (DR) methods. Conventional supervised DR methods use only labeled samples to train the model, leading to limited performance due to the small number of labeled samples available in the real world. This paper proposes a transductive local Fisher discriminant analysis (TLFDA) method that uses the available unlabeled data in the learning process. On the one hand, the label information is utilized to maximize the inter-class distance in the embedding space. On the other hand, the local structural information of all data samples is taken into consideration to maintain the smoothness property. In this way, TLFDA provides more discriminative power than state-of-the-art supervised or semi-supervised DR methods, even when the number of labeled samples is very limited. Our experimental results on the benchmark GCM and Acute Leukemia datasets show its promising performance on gene expression profile-based cancer classification.
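
Below is a hedged sketch of the general semi-supervised idea, not the paper's exact TLFDA objective: Fisher scatter matrices built from the labeled samples are combined with a graph-Laplacian smoothness term computed over all samples, and the embedding is obtained from a generalized eigenproblem. The data, the beta weight, and the y = -1 convention for unlabeled samples are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def semi_supervised_fisher(X, y, n_components=2, beta=1.0, k=10):
    labeled = y >= 0                       # convention: unlabeled samples have y = -1
    Xl, yl = X[labeled], y[labeled]
    mean = Xl.mean(axis=0)
    Sb = np.zeros((X.shape[1],) * 2)       # between-class scatter (labeled only)
    Sw = np.zeros_like(Sb)                 # within-class scatter (labeled only)
    for c in np.unique(yl):
        Xc = Xl[yl == c]
        d = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * d @ d.T
        Sw += np.cov(Xc, rowvar=False) * (len(Xc) - 1)
    # Graph Laplacian over ALL samples encourages a locally smooth embedding.
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()
    L = np.diag(W.sum(axis=1)) - W
    A = Sw + beta * X.T @ L @ X + 1e-6 * np.eye(X.shape[1])
    _, vecs = eigh(Sb, A)                  # generalized eigenproblem
    return vecs[:, -n_components:]         # projection matrix

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(i, 1.0, (50, 20)) for i in range(3)])
y = np.full(150, -1)
y[::5] = np.repeat(np.arange(3), 10)       # only a few labeled samples
Z = X @ semi_supervised_fisher(X, y)
print(Z.shape)                             # (150, 2) embedded samples
```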


Proceedings of SPIE | 2012

Live-Wire-Based Segmentation of 3D Anatomical Structures for Image-Guided Lung Interventions

Kongkuo Lu; Sheng Xu; Zhong Xue; Stephen T. C. Wong

Computed Tomography (CT) has been widely used for assisting in lung cancer detection/diagnosis and treatment. In lung cancer diagnosis, suspect lesions or regions of interest (ROIs) are usually analyzed in screening CT scans. Then, CT-based image-guided minimally invasive procedures are performed for further diagnosis through bronchoscopic or percutaneous approaches. Thus, ROI segmentation is a preliminary but vital step for abnormality detection, procedural planning, and intra-procedural guidance. In lung cancer diagnosis, such ROIs can be tumors, lymph nodes, nodules, etc., which may vary in size and shape and exhibit other complicating phenomena. Manual segmentation approaches are time-consuming, user-biased, and cannot guarantee reproducible results. Automatic methods do not require user input, but they are usually highly application-dependent. To balance efficiency, accuracy, and robustness, considerable effort has been devoted to semi-automatic strategies, which retain full user control while minimizing human interaction. Among available semi-automatic approaches, the live-wire algorithm has been recognized as a valuable tool for segmentation of a wide range of ROIs from chest CT images. In this paper, a new 3D extension of the traditional 2D live-wire method is proposed for 3D ROI segmentation. In the experiments, the proposed approach is applied to a set of anatomical ROIs from 3D chest CT images, and the results are compared with segmentations derived from a previously evaluated live-wire-based approach.
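
The paper's 3D extension is not reproduced here; the short sketch below shows only the classical 2D live-wire ingredients: an edge-based cost image and a minimum-cost path traced between two seed points, using scikit-image's route_through_array on an invented synthetic image.

```python
# Minimal 2D live-wire-style sketch (not the paper's 3D method): build a cost
# image that is low on strong edges, then trace a minimum-cost path between
# two user-chosen seed points with Dijkstra-style routing.
import numpy as np
from skimage import filters, graph

# Synthetic image: a bright disk whose boundary we want to trace.
yy, xx = np.mgrid[:100, :100]
img = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2).astype(float)
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

edges = filters.sobel(img)                    # strong response on the boundary
cost = 1.0 / (edges + 1e-3)                   # cheap to travel along edges

# Two seed points roughly on opposite sides of the boundary.
start, end = (20, 50), (80, 50)
path, total_cost = graph.route_through_array(cost, start, end,
                                             fully_connected=True)
print(len(path), "pixels on the traced contour segment")
```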


Medical Image Computing and Computer Assisted Intervention | 2017

Dynamic Respiratory Motion Estimation Using Patch-Based Kernel-PCA Priors for Lung Cancer Radiotherapy

Tiancheng He; Ramiro Pino; Bin S. Teh; Stephen T. C. Wong; Zhong Xue

In traditional radiation therapy of lung cancer, the planning target volume (PTV) is delineated from the average or a single phase of the planning 4D-CT, which is then registered to the intra-procedural 3D-CT for delivery of the radiation dose. Because of respiratory motion, the radiation needs to be gated so that the PTV covers the tumor. 4D planning deals with multiple breathing phases; however, since breathing patterns during treatment can change, there are discrepancies between the planned 4D volumes and the actual tumor shape and position. Recent work has shown that dynamically estimating lung motion from chest motion is promising. In this paper, we propose a patch-based Kernel-PCA model for estimating lung motion from chest and upper abdomen motion. First, a statistical model is established from the 4D motion fields of a population. Then, the lung motion of a patient is estimated dynamically based on the patient's 4D-CT image and chest and upper abdomen motion, using the population's statistical model as prior knowledge. This lung motion estimation algorithm aims to adapt the patient's planning 4D-CT to his/her current breathing status dynamically during treatment so that the location and shape of the lung tumor can be precisely tracked. Thus, it reduces possible damage to surrounding normal tissue, reduces side effects, and improves the efficiency of radiation therapy. In the experiments, we used the leave-one-out method to evaluate the estimation accuracy on images of 51 male subjects and compared the linear and nonlinear estimation scenarios. The results showed smaller lung field matching errors for the proposed patch-based nonlinear estimation.
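
Below is a hedged, much-simplified sketch of the nonlinear estimation scenario only: a Kernel-PCA prior over training motion fields plus a kernel regression from the surrogate signal to the prior's coefficients. It is not the paper's patch-based model, and the synthetic data, dimensions, and kernel choices are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
n_train, surrogate_dim, motion_dim = 50, 100, 2000   # invented toy sizes
z = rng.standard_normal((n_train, 2))                # latent breathing state
A = rng.standard_normal((2, surrogate_dim))          # latent -> surrogate map
B = rng.standard_normal((2, motion_dim))             # latent -> motion map
surrogate = np.tanh(z @ A)                           # nonlinear chest/abdomen signal
motion = z @ B                                       # internal motion fields

# Kernel-PCA prior over motion fields, with an inverse map back to motion space.
kpca = KernelPCA(n_components=5, kernel="rbf",
                 fit_inverse_transform=True).fit(motion)
coeffs = kpca.transform(motion)
reg = KernelRidge(kernel="rbf", alpha=1e-3).fit(surrogate, coeffs)

# Estimate a motion field for a new surrogate observation.
s_new = np.tanh(rng.standard_normal((1, 2)) @ A)
m_hat = kpca.inverse_transform(reg.predict(s_new))
print(m_hat.shape)                                   # (1, motion_dim)
```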


International Conference on Pattern Recognition | 2016

Manifold Regularized Multi-view Subspace Clustering for image representation

Lei Wang; Danping Li; Tiancheng He; Zhong Xue

Subspace clustering refers to the task of clustering a collection of points drawn from a high-dimensional space into a union of multiple subspaces that best fits them. State-of-the-art approaches have been proposed for tackling this clustering problem using low-rank or sparse optimization techniques. However, most traditional subspace clustering methods are developed for single-view data and are not directly applicable to the multi-view scenario. In this paper, we present a Manifold Regularized Multi-view Subspace Clustering (MRMSC) method to better incorporate the correlated and complementary information from different views. MRMSC yields a unified affinity representation by joint optimization across different views. To respect the data manifold locally, a graph Laplacian is constructed to maintain the intrinsic geometrical structure of each view. In the multi-view integration, a sparsity constraint is imposed on the unified affinity representation in order to better reflect the data relationships from multiple views or features. In experiments, we compared the clustering performance of MRMSC with single-view and concatenated multi-view methods on different datasets. The results showed that better clustering performance can be achieved by fusing the multiple features into a unified affinity representation with MRMSC.
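
The sketch below is a much-simplified illustration of the pipeline's skeleton only: a self-expressive affinity per view, fused across views, then spectral clustering. The paper's sparsity constraint and graph-Laplacian manifold regularization are deliberately omitted, and the two synthetic views are invented.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_affinity(X, lam=0.1):
    """Least-squares self-representation: each sample as a combination of the others."""
    G = X @ X.T                                   # samples in rows
    C = np.linalg.solve(G + lam * np.eye(len(X)), G)
    np.fill_diagonal(C, 0.0)                      # forbid trivial self-representation
    return np.abs(C) + np.abs(C.T)                # symmetric affinity

rng = np.random.default_rng(4)
base = np.repeat(np.eye(3), 20, axis=0)           # 3 clusters, 60 samples
views = [base @ rng.standard_normal((3, d)) + 0.05 * rng.standard_normal((60, d))
         for d in (30, 50)]                       # two feature views

# Fuse the per-view affinities into one unified affinity, then cluster it.
W = sum(self_expressive_affinity(X) for X in views) / len(views)
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```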


IEEE EMBS International Conference on Biomedical and Health Informatics | 2016

A three-dimensional medical image segmentation app using graph theory

Tiancheng He; Zhong Xue; Stephen T. C. Wong

We present a smartphone app for three-dimensional (3-D) medical image segmentation, aimed at providing a portable image computing solution for physicians. Smartphones have become powerful portable devices that can not only visualize images but also support basic annotation tasks for diagnosis and treatment-plan review. In this paper, we present a fast region of interest (ROI) segmentation app for medical images. Using CT lung images, we show that such images can be streamed onto the smartphone from the picture archiving and communication system (PACS), and that tumors or other ROIs can be segmented by simply pinpointing initial points inside and outside an ROI on the image. Image visualization is also provided so that radiological reporting or surgical (e.g., intervention) planning can be accomplished through the interface. To ensure fast computation within the limited device memory, a graph-based segmentation algorithm is tailored to occupy less memory using streaming-style computation. Once the image is loaded, a graph is computed to reflect the spatial-intensity relationships among the voxels. Then, given initial points inside and outside an ROI, segmentation is performed based on the graph. Quantitative results show that fast and accurate segmentation can be achieved using the proposed method. We expect that such apps can be utilized for image reading, ROI annotation, or surgical planning review in the areas of radiology, intervention, and oncology.
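
The app's own graph algorithm and streaming implementation are not reproduced here; as an illustration of the general seeded, graph-based idea (a spatial-intensity graph plus inside/outside seed points), the sketch below uses scikit-image's random walker on an invented synthetic volume.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic 3D "CT" volume: a bright spherical lesion in a noisy background.
zz, yy, xx = np.mgrid[:40, :40, :40]
volume = ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 8 ** 2).astype(float)
volume += 0.2 * np.random.default_rng(0).standard_normal(volume.shape)

# Seeds: label 1 = a point inside the ROI, label 2 = points outside it.
markers = np.zeros(volume.shape, dtype=np.uint8)
markers[20, 20, 20] = 1                      # inside seed
markers[2, 2, 2] = markers[37, 37, 37] = 2   # outside seeds

# The random walker builds a graph over the voxels weighted by intensity
# differences and propagates the seed labels through it.
segmented = random_walker(volume, markers, beta=50)
print("ROI voxels:", int((segmented == 1).sum()))
```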


Proceedings of SPIE | 2012

IntegriSense molecular image sequence classification using Gaussian mixture model

Tiancheng He; Zhong Xue; Kongkuo Lu; Miguel Valdivia y Alvarado; Stephen T. C. Wong

Targeted fluorescence imaging agents such as IntegriSense 680 can be used to label integrin αvβ3 expressed in tumor cells and to distinguish tumor from normal tissues. Coupled with endomicroscopy and image-guided intervention devices, the fluorescence contrast captured with this fiber-optic imaging technique can be used in a Minimally Invasive Multimodality Image Guided (MIMIG) system for on-site peripheral lung cancer diagnosis. In this work, we propose an automatic quantification approach for IntegriSense-based fluorescence endomicroscopy image sequences. First, a sliding time window is used to calculate the histogram of the frames at a given time point, also denoted as the IntegriSense signal. The intensity distributions of the endomicroscopy image sequences can be broadly classified into three groups: high, middle, and low intensities, which correspond approximately to tumor, normal tissue, and background (air) within the lungs, respectively. At a given time point, the histogram calculated from the sliding time window is fit with a Gaussian mixture model, and the mean, standard deviation, and weight of each Gaussian component are identified. Finally, a threshold can be applied to the weight of the high-intensity component to detect tumor signal. This algorithm can be used as an automatic tumor detection tool for IntegriSense-based endomicroscopy. In the experiments, we validated the algorithm using 20 IntegriSense-based fluorescence endomicroscopy image sequences collected from 6 rabbit experiments, in which a VX2 tumor was implanted into the lung of each rabbit and image-guided endomicroscopy was performed. The automatic classification results were compared with manual results, and high sensitivity and specificity were obtained.
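
A minimal sketch of the quantification step on invented intensity data (the window sizes, intensity values, and the 0.05 threshold are illustrative assumptions, not the paper's settings): fit a three-component Gaussian mixture to the pooled intensities of a frame window and inspect the weight of the brightest component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical frame window: mostly dim background, some mid, a bright minority.
window = np.concatenate([rng.normal(20, 5, 6000),
                         rng.normal(90, 10, 3000),
                         rng.normal(200, 15, 1000)])

# Three components: background / normal tissue / tumor-like high intensity.
gmm = GaussianMixture(n_components=3, random_state=0).fit(window.reshape(-1, 1))
order = np.argsort(gmm.means_.ravel())        # sort components by mean intensity
high_weight = gmm.weights_[order[-1]]         # weight of the brightest component

threshold = 0.05                              # illustrative cut-off, not from the paper
print(f"high-intensity weight = {high_weight:.3f}",
      "-> tumor signal" if high_weight > threshold else "-> no tumor signal")
```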


Proceedings of SPIE | 2012

Automatic segmentation and centroid detection of skin sensors for lung interventions

Kongkuo Lu; Sheng Xu; Zhong Xue; Stephen T. C. Wong

Electromagnetic (EM) tracking has been recognized as a valuable tool for locating interventional devices in procedures such as lung and liver biopsy or ablation. The advantage of this technology is its real-time connection to the 3D volumetric roadmap, i.e., the CT of a patient's anatomy, while the intervention is performed. EM-based guidance requires tracking the tip of the interventional device, transforming the location of the device onto pre-operative CT images, and superimposing the device in the 3D images to help physicians complete the procedure more effectively. A key requirement of this data integration is to automatically find the mapping between the EM and CT coordinate systems. Thus, skin fiducial sensors are attached to patients before acquiring the pre-operative CTs. Those sensors can then be recognized in both the CT and EM coordinate systems and used to calculate the transformation matrix. In this paper, to enable the EM-based navigation workflow and reduce procedural preparation time, an automatic fiducial detection method is proposed to obtain the centroids of the sensors from the pre-operative CT. The approach has been applied to 13 rabbit datasets derived from an animal study and eight human images from an observational study. The numerical results show that it is a reliable and efficient method for use in EM-guided applications.
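
The paper's detection criteria are not reproduced here; the sketch below only illustrates the generic pipeline one might use for bright skin fiducials in CT (threshold, connected-component labeling, size filtering, centroid extraction), with all intensity values and size limits invented for the toy volume.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
ct = rng.normal(-700, 100, (60, 128, 128))          # toy "CT" background (HU-like)
for z, y, x in [(30, 20, 20), (30, 20, 100), (30, 100, 60)]:
    ct[z-1:z+2, y-1:y+2, x-1:x+2] = 3000            # small bright fiducial sensors

mask = ct > 2000                                    # sensor-intensity threshold
labels, n = ndimage.label(mask)                     # connected components
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if 5 <= s <= 200]  # plausible sensor size
centroids = ndimage.center_of_mass(mask, labels, keep)
print(centroids)                                    # (z, y, x) centroid of each sensor
```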

Collaboration


Dive into Zhong Xue's collaborations. Top co-authors:

Tiancheng He (Houston Methodist Hospital)
Bin S. Teh (Houston Methodist Hospital)
Sheng Xu (National Institutes of Health)
Fang Nie (Houston Methodist Hospital)
Hong Zhao (Houston Methodist Hospital)
Jiajun Wang (Houston Methodist Hospital)