Publication


Featured research published by Zhiyun Xue.


IEEE Transactions on Medical Imaging | 2014

Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration

Sema Candemir; Stefan Jaeger; Kannappan Palaniappan; Jonathan P. Musco; Rahul Singh; Zhiyun Xue; Alexandros Karargyris; Sameer K. Antani; George R. Thoma; Clement J. McDonald

The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource-constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based, patient-specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.
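The retrieval stage lends itself to a compact illustration. Below is a minimal sketch, in Python with numpy, of stage 1 as described above: the partial Radon transform is taken here at only 0 and 90 degrees, where it reduces to row and column sums, and the Bhattacharyya coefficient scores profile similarity. The exact projection angles, and the SIFT-flow and graph-cut stages, are not reproduced.

```python
import numpy as np

def projection_profiles(img):
    """Partial Radon transform at 0 and 90 degrees: row and column sums,
    normalized to unit mass so they compare as distributions. Assumes all
    images have been resized to a common shape beforehand."""
    rows = img.sum(axis=1).astype(float)
    cols = img.sum(axis=0).astype(float)
    return rows / rows.sum(), cols / cols.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized profiles (1 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def rank_training_set(patient_cxr, training_cxrs):
    """Rank training images (with masks) by shape similarity to the patient CXR."""
    pr, pc = projection_profiles(patient_cxr)
    scores = []
    for i, t in enumerate(training_cxrs):
        tr, tc = projection_profiles(t)
        scores.append((bhattacharyya(pr, tr) + bhattacharyya(pc, tc), i))
    return sorted(scores, reverse=True)  # most similar training images first
```

The top-ranked masks would then seed the patient-specific lung model for the registration stage.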


IEEE Transactions on Medical Imaging | 2014

Automatic Tuberculosis Screening Using Chest Radiographs

Stefan Jaeger; Alexandros Karargyris; Sema Candemir; Les R. Folio; Jenifer Siegelman; Fiona M. Callaghan; Zhiyun Xue; Kannappan Palaniappan; Rahul K. Singh; Sameer K. Antani; George R. Thoma; Yi-Xiang J. Wang; Pu-Xuan Lu; Clement J. McDonald

Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high. Standard diagnostics still rely on methods developed in the last century. They are slow and often unreliable. In an effort to reduce the burden of the disease, this paper presents our automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs. We first extract the lung region using a graph cut segmentation method. For this lung region, we compute a set of texture and shape features, which enable the X-rays to be classified as normal or abnormal using a binary classifier. We measure the performance of our system on two datasets: a set collected by the tuberculosis control program of our local county's health department in the United States, and a set collected by Shenzhen Hospital, China. The proposed computer-aided diagnostic system for TB screening, which is ready for field deployment, achieves a performance that approaches the performance of human experts. We achieve an area under the ROC curve (AUC) of 87% (78.3% accuracy) for the first set, and an AUC of 90% (84% accuracy) for the second set. For the first set, we compare our system performance with the performance of radiologists. When trying not to miss any positive cases, radiologists achieve an accuracy of about 82% on this set, and their false positive rate is about half of our system's rate.
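Once the lung region and its descriptors are available, the classification step reduces to standard supervised learning. A minimal sketch with scikit-learn, assuming precomputed per-image texture/shape feature vectors (the placeholder data, the SVM choice, and the 0.5 decision threshold are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# X: one row of texture/shape features per segmented lung region (placeholder data)
# y: 1 = abnormal (TB-consistent), 0 = normal
rng = np.random.default_rng(0)
X, y = rng.random((200, 32)), rng.integers(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)  # binary classifier on lung features

scores = clf.predict_proba(X_te)[:, 1]       # abnormality score per radiograph
print("AUC:", roc_auc_score(y_te, scores))
print("accuracy:", accuracy_score(y_te, scores > 0.5))
```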


Journal of the Association for Information Science and Technology | 2013

Image retrieval from scientific publications: Text and image content processing to separate multipanel figures

Emilia Apostolova; Daekeun You; Zhiyun Xue; Sameer K. Antani; Dina Demner-Fushman; George R. Thoma

Images contained in scientific publications are widely considered useful for educational and research purposes, and their accurate indexing is critical for efficient and effective retrieval. Such image retrieval is complicated by the fact that figures in the scientific literature often combine multiple individual subfigures (panels). Multipanel figures are in fact the predominant pattern in certain types of scientific publications. The goal of this work is to automatically segment multipanel figures—a necessary step for automatic semantic indexing and in the development of image retrieval systems targeting the scientific literature. We have developed a method that uses the image content as well as the associated figure caption to: (1) automatically detect panel boundaries; (2) detect panel labels in the images and convert them to text; and (3) detect the labels and textual descriptions of each panel within the captions. Our approach combines the output of image-content and text-based processing steps to split the multipanel figures into individual subfigures and assign to each subfigure its corresponding section of the caption. The developed system achieved precision of 81% and recall of 73% on the task of automatic segmentation of multipanel figures.
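For the image-content side of step (1), a common heuristic is to look for uniform whitespace gutters between panels. A minimal sketch under that assumption (the `white` and `min_gap` values are guesses; the published system also uses panel labels and caption text, which this does not reproduce):

```python
import numpy as np

def panel_gutters(fig, white=245, min_gap=10):
    """Find candidate vertical panel boundaries as bands of near-uniform
    whitespace in a grayscale figure; the same scan over rows gives the
    horizontal cuts."""
    is_white_col = (fig >= white).mean(axis=0) > 0.99  # fraction of white pixels per column
    cuts, start = [], None
    for x, w in enumerate(is_white_col):
        if w and start is None:
            start = x                         # a whitespace band begins
        elif not w and start is not None:
            if x - start >= min_gap:
                cuts.append((start + x) // 2) # cut in the middle of the gutter
            start = None
    return cuts  # x-coordinates at which to split the figure into panels
```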


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Tissue classification using cluster features for lesion detection in digital cervigrams

Xiaolei Huang; Wei Wang; Zhiyun Xue; Sameer K. Antani; L. Rodney Long; Jose Jeronimo

In this paper, we propose a new method for automated detection and segmentation of different tissue types in digitized uterine cervix images using mean-shift clustering and support vector machine (SVM) classification on cluster features. We specifically target the segmentation of precancerous lesions in a NCI/NLM archive of 60,000 cervigrams. Due to large variations in image appearance in the archive, color and texture features of a tissue type in one image often overlap with those of a different tissue type in another image. This makes reliable tissue segmentation in a large number of images a very challenging problem. In this paper, we propose the use of powerful machine learning techniques such as SVMs to learn, from a database with ground truth annotations, critical visual signs that correlate with important tissue types, and to use the learned classifier for tissue segmentation in unseen images. In our experiments, SVM performs better than unsupervised methods such as Gaussian Mixture clustering, but it does not scale very well to large training sets and does not always guarantee improved performance given more training data. To address this problem, we combine SVM and clustering so that the features we extract for classification are features of clusters returned by the mean-shift clustering algorithm. Compared to classification using individual pixel features, classification by cluster features greatly reduces the dimensionality of the problem, making it more efficient while producing results of comparable accuracy.
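The cluster-then-classify idea can be sketched directly with scikit-learn. The feature set below (Lab color plus pixel coordinates, summarized by the cluster's mean color) is a simplification of the paper's color/texture features, and mean shift on full-resolution images is slow, so downsampling is assumed:

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.svm import SVC

def cluster_features(img_lab):
    """Mean-shift cluster pixels on (L, a, b, x, y), then represent each
    cluster by its mean color vector. Illustrative features only."""
    h, w, _ = img_lab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([img_lab.reshape(-1, 3),
                           xs.ravel(), ys.ravel()]).astype(float)
    labels = MeanShift(bin_seeding=True).fit_predict(pix)
    feats = np.array([pix[labels == k, :3].mean(axis=0)
                      for k in np.unique(labels)])
    return labels.reshape(h, w), feats

# Classifying clusters instead of pixels means far fewer samples to train
# and predict on, while every pixel still inherits its cluster's label:
# clf = SVC().fit(train_cluster_feats, train_cluster_tissue_labels)
# tissue_per_cluster = clf.predict(feats)
```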


Medical Imaging 2007: Image Processing | 2007

Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis

Zhiyun Xue; Sameer K. Antani; L. Rodney Long; Jose Jeronimo; George R. Thoma

Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120 images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image. Empirical results are analyzed to derive conclusions on the performance of each approach.
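The second SR approach maps directly onto standard OpenCV operations. A minimal sketch (the structuring-element size is a guess; the paper does not fix one here, and the inpainting step is a common follow-up rather than part of the comparison):

```python
import cv2
import numpy as np

def specular_mask(gray, kernel_size=15):
    """Specular-reflection detection via Otsu thresholding of a top-hat
    transformed image. Expects an 8-bit single-channel cervigram."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se)  # bright, small-scale structures
    _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask  # 255 at candidate SR pixels

# SR pixels are typically repaired before lesion analysis, e.g.:
# repaired = cv2.inpaint(bgr, specular_mask(gray), 3, cv2.INPAINT_TELEA)
```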


Journal of Signal Processing Systems | 2009

Balancing the Role of Priors in Multi-Observer Segmentation Evaluation

Yaoyao Zhu; Xiaolei Huang; Wei Wang; Daniel P. Lopresti; L. Rodney Long; Sameer K. Antani; Zhiyun Xue; George R. Thoma

Comparison of a group of multiple observer segmentations is known to be a challenging problem. A good segmentation evaluation method would allow different segmentations not only to be compared, but to be combined to generate a "true" segmentation with higher consensus. Numerous multi-observer segmentation evaluation approaches have been proposed in the literature; STAPLE in particular probabilistically estimates the true segmentation by an optimal combination of the observed segmentations and a prior model of the truth. As an Expectation–Maximization (EM) algorithm, STAPLE's convergence to the desired local minimum depends on good initializations for the truth prior and the observer-performance prior. However, accurate modeling of the initial truth prior is nontrivial. Moreover, of the two priors, the truth prior always dominates, so in certain scenarios where meaningful observer-performance priors are available, STAPLE cannot take advantage of that information. In this paper, we propose a Bayesian decision formulation of the problem that permits the two types of prior knowledge to be integrated in a complementary manner in four cases with differing application purposes: (1) with known truth prior; (2) with observer prior; (3) with neither truth prior nor observer prior; and (4) with both truth prior and observer prior. The third and fourth cases are not handled (or are effectively ignored) by STAPLE, and we propose a new method to combine multiple-observer segmentations based on the maximum a posteriori (MAP) principle, which respects the observer prior regardless of the availability of the truth prior. Based on the four scenarios, we have developed a web-based software application that implements the flexible segmentation evaluation framework for digitized uterine cervix images. Experimental results show that our framework has flexibility in effectively integrating different priors for multi-observer segmentation evaluation, and it generates results that compare favorably to those of the STAPLE algorithm and the Majority Vote Rule.
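The MAP fusion at the heart of this formulation is short to write down. A minimal per-pixel sketch, assuming conditionally independent binary observers with known sensitivities and specificities (the paper's actual four-case framework and web application are not reproduced; this covers the "both priors known" case and reduces to the observer-prior-only case with a flat truth prior):

```python
import numpy as np

def map_combine(segs, sens, spec, prior_fg=0.5):
    """Per-pixel MAP fusion of binary observer segmentations.

    segs:    (J, H, W) binary masks from J observers
    sens[j]: observer j's sensitivity p_j, in (0, 1)
    spec[j]: observer j's specificity q_j, in (0, 1)
    prior_fg: truth prior P(T=1), a scalar or an (H, W) probability map
    """
    segs = np.asarray(segs, dtype=bool)
    # start from the prior log-odds of foreground
    log_odds = np.log(prior_fg) - np.log1p(-prior_fg)
    for s, p, q in zip(segs, sens, spec):
        # add each observer's log-likelihood ratio P(D_j | T=1) / P(D_j | T=0)
        log_odds = log_odds + np.where(s, np.log(p / (1 - q)),
                                          np.log((1 - p) / q))
    return log_odds > 0  # MAP label: foreground where posterior odds exceed 1
```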


Proceedings of SPIE | 2009

Segmentation of mosaicism in cervicographic images using support vector machines

Zhiyun Xue; L. Rodney Long; Sameer K. Antani; Jose Jeronimo; George R. Thoma

The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating a large digital repository of cervicographic images for the study of uterine cervix cancer prevention. One of the research goals is to automatically detect diagnostic bio-markers in these images. Reliable bio-marker segmentation in large biomedical image collections is a challenging task due to the large variation in image appearance. Methods described in this paper focus on segmenting mosaicism, which is an important vascular feature used to visually assess the degree of cervical intraepithelial neoplasia. The proposed approach uses support vector machines (SVM) trained on a ground truth dataset annotated by medical experts (which circumvents the need for vascular structure extraction). We have evaluated the performance of the proposed algorithm and experimentally demonstrated its feasibility.
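Since the abstract does not detail the feature extraction, the sketch below substitutes simple block-wise color statistics to show the overall train-and-paint-back pattern of SVM-based region segmentation; the block size and features are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def block_features(img, size=16):
    """Split a color image into size x size blocks and compute mean/std
    color statistics per block. The paper's features for mosaic
    vasculature are richer; this is illustrative only."""
    h, w, c = img.shape
    feats, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            b = img[y:y + size, x:x + size].reshape(-1, c).astype(float)
            feats.append(np.concatenate([b.mean(axis=0), b.std(axis=0)]))
            coords.append((y, x))
    return np.array(feats), coords

# Train on expert-annotated blocks, then paint predictions into a mask:
# clf = SVC().fit(train_feats, train_labels)        # 1 = mosaicism
# feats, coords = block_features(test_img)
# mask = np.zeros(test_img.shape[:2], dtype=np.uint8)
# for (y, x), lab in zip(coords, clf.predict(feats)):
#     mask[y:y + 16, x:x + 16] = lab
```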


International Symposium on Biomedical Imaging | 2009

A classifier ensemble based on performance level estimation

Wei Wang; Yaoyao Zhu; Xiaolei Huang; Daniel P. Lopresti; Zhiyun Xue; L. Rodney Long; Sameer K. Antani; George R. Thoma

In this paper, we introduce a new classifier ensemble approach, applied to tissue segmentation in optical images of the uterine cervix. Ensemble methods combine the predictions of a set of diverse classifiers. The main contribution of our approach is an effective way of combining them based on each classifier's performance level, namely its sensitivity p and specificity q, which also produces an optimal estimate of the true segmentation. In comparison with previous work [1] that utilizes the STAPLE algorithm [2] for performance-level-based combination, this work achieves multiple-observer segmentation in a Bayesian decision framework using the maximum a posteriori (MAP) principle, considering each classifier as an observer. In our experiments, we applied our method and several other popular ensemble methods to the problem of detecting Acetowhite regions in cervical images. On 100 images, the overall performance of the proposed method is better than: (i) an overall classifier learned using the entire training set, (ii) an average-voting ensemble, and (iii) an ensemble based on the STAPLE algorithm; it is comparable to that of majority voting and that of the (manually picked) best-performing individual classifier in the ensemble set.
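What the (p, q)-based weighting buys over plain voting is easiest to see with concrete numbers (the values below are invented for illustration). Each vote contributes its log-likelihood ratio, as in the MAP fusion sketched for the previous paper, so a single highly specific classifier's positive vote can outweigh two mediocre negative votes:

```python
import numpy as np

# Hypothetical performance levels for three classifiers in the ensemble.
p = np.array([0.70, 0.65, 0.60])   # sensitivities
q = np.array([0.99, 0.70, 0.70])   # specificities
votes = np.array([1, 0, 0])        # predictions for one region

# A positive vote weighs log(p/(1-q)); a negative vote weighs log((1-p)/q).
w = np.where(votes == 1, np.log(p / (1 - q)), np.log((1 - p) / q))
print(w.sum() > 0)  # True: classifier 1's confident positive wins,
                    # although majority voting would output "negative"
```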


Computer-Based Medical Systems | 2015

Chest X-ray Image View Classification

Zhiyun Xue; Daekeun You; Sema Candemir; Stefan Jaeger; Sameer K. Antani; L. Rodney Long; George R. Thoma

The view information of a chest X-ray (CXR), such as frontal or lateral, is valuable in computer-aided diagnosis (CAD) of CXRs. For example, it helps in selecting atlas models for automatic lung segmentation. However, the image header very often does not provide this information. In this paper, we present a new method for classifying a CXR into one of two categories: frontal view vs. lateral view. The method consists of three major components: image pre-processing, feature extraction, and classification. The features we selected are the image profile, body size ratio, pyramid of histograms of orientation gradients, and our newly developed contour-based shape descriptor. The method was tested on a large (more than 8,200 images) CXR dataset hosted by the National Library of Medicine. The very high classification accuracy (over 99% for 10-fold cross validation) demonstrates the effectiveness of the proposed method.
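The two simpler features are easy to sketch. Below, the image profile is a fixed-length resampling of row and column mean-intensity curves, and the body size ratio is the fraction of image width occupied by the body; the threshold is a guess, and the PHOG and contour-based shape descriptor are not reproduced:

```python
import numpy as np

def profile_and_ratio(gray, n_bins=32, body_thresh=0.5):
    """Image-profile and body-size-ratio features for a grayscale CXR."""
    g = gray.astype(float) / gray.max()
    rows, cols = g.mean(axis=1), g.mean(axis=0)
    # resample each profile to a fixed length so images of any size compare
    grid = np.linspace(0, 1, n_bins)
    profile = np.concatenate([
        np.interp(grid, np.linspace(0, 1, len(rows)), rows),
        np.interp(grid, np.linspace(0, 1, len(cols)), cols)])
    body_cols = (g > body_thresh).any(axis=0)   # columns containing bright (body) pixels
    return np.append(profile, body_cols.mean()) # profile + body size ratio

# Frontal CXRs tend to be wider and roughly symmetric in their column
# profile, while lateral views are narrower and skewed; even these coarse
# features separate the two classes well when fed to a standard classifier.
```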


Computerized Medical Imaging and Graphics | 2015

Literature-based biomedical image classification and retrieval

Matthew S. Simpson; Daekeun You; Md. Mahmudur Rahman; Zhiyun Xue; Dina Demner-Fushman; Sameer K. Antani; George R. Thoma

Literature-based image informatics techniques are essential for managing the rapidly increasing volume of information in the biomedical domain. Compound figure separation, modality classification, and image retrieval are three related tasks useful for enabling efficient access to the most relevant images contained in the literature. In this article, we describe approaches to these tasks and the evaluation of our methods as part of the 2013 medical track of ImageCLEF. In performing each of these tasks, the textual and visual features used to represent images are an important consideration often left unaddressed. Therefore, we also describe a gradient-based optimization strategy for determining meaningful combinations of features and apply the method to the image retrieval task. An evaluation of our optimization strategy indicates the method is capable of producing statistically significant improvements in retrieval performance. Furthermore, the results of the 2013 ImageCLEF evaluation demonstrate the effectiveness of our techniques. In particular, our text-based and mixed image retrieval methods ranked first among all the participating groups.
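The weight-optimization idea can be sketched generically: combine per-feature similarity scores linearly and adjust the weights by gradient descent on a smooth ranking objective. The logistic pairwise loss below is a stand-in assumption; the paper's actual objective and feature set are not reproduced:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_weights(sims, rel, lr=0.1, steps=200):
    """Gradient descent on a logistic pairwise ranking loss: push every
    relevant document's combined score above every irrelevant one's.

    sims: (n_docs, n_features) per-feature similarity scores for one query
    rel:  (n_docs,) binary relevance labels
    """
    w = np.full(sims.shape[1], 1.0 / sims.shape[1])
    pos, neg = sims[rel == 1], sims[rel == 0]
    diff = pos[:, None, :] - neg[None, :, :]   # (n_pos, n_neg, n_features)
    for _ in range(steps):
        margins = diff @ w                     # combined-score gap per pair
        # gradient of mean log(1 + exp(-margin)) over all pairs
        grad = -(sigmoid(-margins)[..., None] * diff).mean(axis=(0, 1))
        w = np.clip(w - lr * grad, 0, None)    # keep weights nonnegative
    return w / w.sum()
```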

Collaboration


Dive into Zhiyun Xue's collaboration.

Top Co-Authors

Sameer K. Antani, National Institutes of Health
George R. Thoma, National Institutes of Health
L. Rodney Long, National Institutes of Health
Dina Demner-Fushman, National Institutes of Health
Sema Candemir, National Institutes of Health
Stefan Jaeger, National Institutes of Health
Daekeun You, National Institutes of Health
Alexandros Karargyris, National Institutes of Health