
Publication


Featured research published by Yingxuan Zhu.


International Conference on Information Fusion | 2007

Evaluation of ICA based fusion of hyperspectral images for color display

Yingxuan Zhu; Pramod K. Varshney; Hao Chen

Hyperspectral imaging is becoming increasingly important in a variety of applications. These images contain a large number of contiguous bands that provide information at a fine spectral resolution and, therefore, cannot be displayed directly on an RGB color display. There has been some recent work on the problem of fusing hyperspectral images into three-band images for color display purposes. In this paper, we evaluate the performance of our recently proposed approach based on independent component analysis, correlation coefficient and mutual information (ICA-CCMI) for fusing the information from a large number of bands into three images suitable for color display. Depending on whether reference images are available, several image quality metrics, such as entropy and edge correlation, have been proposed and employed to evaluate fusion performance on three widely used hyperspectral image datasets.
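Below is a minimal Python sketch of the two quality metrics named above: entropy (no reference needed) and edge correlation against a reference band. The use of a Sobel gradient magnitude for the edge maps and the function names are illustrative assumptions, not the exact definitions used in the evaluation.

import numpy as np
from scipy.ndimage import sobel

def band_entropy(img, bins=256):
    # Shannon entropy of one band/channel: higher values suggest more information retained.
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def edge_correlation(fused_band, reference_band):
    # Correlation between gradient-magnitude (edge) maps of a fused channel and a reference band.
    def edges(x):
        x = x.astype(float)
        return np.hypot(sobel(x, axis=0), sobel(x, axis=1))
    return np.corrcoef(edges(fused_band).ravel(), edges(reference_band).ravel())[0, 1]

Either metric can be evaluated per channel of the fused three-band result; edge correlation additionally requires choosing a reference band or composite.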


Journal of Visual Communication and Image Representation | 2013

Image registration using BP-SIFT

Yingxuan Zhu; Samuel Cheng; Vladimir Stankovic; Lina Stankovic

Scale Invariant Feature Transform (SIFT) is a powerful technique for image registration. Although SIFT descriptors accurately capture invariant image characteristics around keypoints, the matching approaches commonly used for registration only loosely represent the geometric relationships among descriptors. In this paper, we propose an image registration algorithm named BP-SIFT, in which we formulate keypoint matching of SIFT descriptors as a global optimization problem and provide a suboptimal solution using belief propagation (BP). Experimental results show significant improvement over conventional SIFT-based matching with reasonable computational complexity.
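As a rough illustration of posing keypoint matching globally, the sketch below extracts SIFT descriptors with OpenCV and solves a one-to-one assignment over descriptor distances with the Hungarian algorithm. This is a simplified stand-in for the BP-SIFT message-passing optimizer, which additionally encodes geometric relations among descriptors; the cost threshold is an assumption.

import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_sift_globally(img1, img2, max_cost=300.0):
    # Extract SIFT keypoints and descriptors from two grayscale images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Unary cost: Euclidean distance between descriptors.
    cost = cdist(des1, des2)
    # Globally optimal one-to-one assignment (stand-in for the BP optimization).
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < max_cost          # drop implausible pairings
    return [(kp1[i].pt, kp2[j].pt) for i, j in zip(rows[keep], cols[keep])]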


Journal of Remote Sensing | 2011

ICA-based fusion for colour display of hyperspectral images

Yingxuan Zhu; Pramod K. Varshney; Hao Chen

Hyperspectral images contain data from a large number of contiguous bands and, therefore, cannot be displayed directly using a colour display system. In this paper, an independent component analysis-based (ICA-based) approach for the problem of fusing hyperspectral images to three-band images for colour display purposes is proposed. Correlation coefficient and mutual information (ICA-CCMI) are used as criteria for selecting three suitable independent components for colour representation. In addition, statistical evaluation metrics for the colour display results of hyperspectral images are provided and discussed in light of different visualization goals. A new quality metric motivated by the quality index is developed to evaluate the structural information of the colour display images. The performance of our approach is validated by applying it to three hyperspectral image datasets. The experimental results demonstrate promising performance for the ICA-CCMI algorithm, compared with existing principal component analysis-based (PCA-based) methods for visualization of hyperspectral images.
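A minimal sketch of the ICA-CCMI idea follows, assuming a cube of shape (rows, cols, bands): run FastICA across the spectral dimension, score each independent component by its average absolute correlation coefficient and mutual information with the original bands, and keep the three best-scoring components as the R, G and B channels. The additive scoring rule, the number of components and the pixel subsampling used to keep the mutual-information estimate tractable are all assumptions, not the paper's exact selection rule.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import mutual_info_regression

def ica_ccmi_rgb(cube, n_components=10, random_state=0):
    # cube: hyperspectral image of shape (rows, cols, bands), bands >= n_components.
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    ics = FastICA(n_components=n_components, random_state=random_state).fit_transform(X)
    # Subsample pixels so the CC/MI estimates stay tractable (an assumption).
    sub = np.random.default_rng(random_state).choice(X.shape[0], size=min(5000, X.shape[0]), replace=False)
    scores = []
    for k in range(n_components):
        cc = np.mean([abs(np.corrcoef(ics[sub, k], X[sub, j])[0, 1]) for j in range(b)])
        mi = np.mean(mutual_info_regression(X[sub], ics[sub, k], random_state=random_state))
        scores.append(cc + mi)                          # simple additive CC + MI score
    best = np.argsort(scores)[-3:]                      # three highest-scoring components
    rgb = ics[:, best].reshape(r, c, 3)
    rgb -= rgb.min(axis=(0, 1), keepdims=True)          # stretch each channel to [0, 1]
    rgb /= rgb.max(axis=(0, 1), keepdims=True) + 1e-12
    return rgb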


International Conference on Image Processing | 2010

Interactive segmentation of medical images using belief propagation with level sets

Yingxuan Zhu; Samuel Cheng; Amrit L. Goel

In this paper, we propose an interactive segmentation method that incorporates user information during the segmentation of a specific anatomic structure. The method uses belief propagation to minimize a global cost function defined over local level sets. The propagation starts with a single user-labeled point and iteratively extends the user information from the labeled pixel to its neighborhood by computing the beliefs of pixels in the same level set as the labeled pixel. Since the segmentation relies on both local user information and global image features, it is less affected by noise and works well even when the target is not clearly distinguishable from its neighborhood. The promising segmentation results also show that our method is robust to objects with high shape variation and inhomogeneous intensity.
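The sketch below illustrates only the core mechanism, belief propagation minimizing a global cost starting from one user-labelled pixel, on a plain 4-connected pixel grid with min-sum messages and a Potts smoothness term. It omits the level set component of the method described above, and the unary cost model (intensity similarity to the seed, flat background cost) is an assumption.

import numpy as np

def bp_segment_from_seed(image, seed_rc, n_iters=30, sigma=0.1, lam=1.0):
    # image: grayscale array scaled to [0, 1]; seed_rc: (row, col) of the user-labelled pixel.
    img = image.astype(float)
    h, w = img.shape
    seed_val = img[seed_rc]
    # Unary costs: channel 0 = background, channel 1 = foreground (intensity close to the seed).
    fg_cost = (img - seed_val) ** 2 / (2 * sigma ** 2)
    bg_cost = np.full_like(img, 0.5)                    # flat background cost (an assumption)
    unary = np.stack([bg_cost, fg_cost], axis=-1)
    unary[seed_rc[0], seed_rc[1]] = [1e6, 0.0]          # hard-label the user's pixel as foreground

    dirs = ("up", "down", "left", "right")              # direction each message travels
    opposite = {"up": "down", "down": "up", "left": "right", "right": "left"}
    shift = {"down": (1, 0), "up": (-1, 0), "right": (0, 1), "left": (0, -1)}
    msgs = {d: np.zeros((h, w, 2)) for d in dirs}       # msgs[d][r, c] = message arriving at (r, c)

    for _ in range(n_iters):
        new_msgs = {}
        for d in dirs:
            # Belief at the sending pixel, excluding what it heard back from the receiver.
            excl = unary + sum(msgs[k] for k in dirs if k != opposite[d])
            # Min-sum message update for a Potts pairwise cost of weight lam.
            m = np.minimum(excl, excl.min(axis=-1, keepdims=True) + lam)
            m -= m.min(axis=-1, keepdims=True)          # normalise for numerical stability
            dr, dc = shift[d]
            m = np.roll(m, (dr, dc), axis=(0, 1))       # deliver to the receiving pixel
            if dr == 1:  m[0] = 0                       # zero out messages that wrapped around
            if dr == -1: m[-1] = 0
            if dc == 1:  m[:, 0] = 0
            if dc == -1: m[:, -1] = 0
            new_msgs[d] = m
        msgs = new_msgs

    belief = unary + sum(msgs.values())
    return belief.argmin(axis=-1).astype(np.uint8)      # 1 where a pixel joins the seed's region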


Proceedings of SPIE | 2010

An automatic system to detect and extract texts in medical images for de-identification

Yingxuan Zhu; Prabhdeep Singh; Khan M. Siddiqui; Michael Gillam

Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images have protected health information (PHI) removed (de-identified) before they are shared for research. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have addressed algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since a de-identification system is designed for end users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, because the text has a marked contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for a prototype application of the text detection and extraction system has been implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computational optimization.
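A compact sketch of the detect-then-filter stage is given below: local variance highlights high-contrast (text-like) regions, and simple geometric constraints on the resulting connected components discard thin lines and large structures. All thresholds are illustrative assumptions, and the subsequent level-set extraction step is not reproduced here.

import numpy as np
from scipy.ndimage import uniform_filter, label, find_objects

def detect_text_regions(image, win=9, var_thresh=0.01, min_area=20, max_area=5000, max_aspect=15.0):
    # Normalise and compute local variance in a win x win window.
    img = image.astype(float) / max(float(image.max()), 1e-8)
    mean = uniform_filter(img, size=win)
    var = uniform_filter(img ** 2, size=win) - mean ** 2
    mask = var > var_thresh                             # high local variance = text-like contrast
    labels, _ = label(mask)
    boxes = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        aspect = max(h, w) / max(min(h, w), 1)
        if min_area <= h * w <= max_area and aspect <= max_aspect:
            boxes.append(sl)                            # keep plausible text bounding boxes
    return boxes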


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Neuronal nuclei localization in 3D using level set and watershed segmentation from laser scanning microscopy images

Yingxuan Zhu; Eric C. Olson; Arun Subramanian; David H. Feiglin; Pramod K. Varshney; Andrzej Krol

Abnormalities in the number and location of cells are hallmarks of both developmental and degenerative neurological diseases. However, standard stereological methods are impractical for assigning each cell's nucleus position within a large volume of brain tissue. We propose an automated approach for segmentation and localization of brain cell nuclei in laser scanning microscopy (LSM) images of embryonic mouse brain. The nuclei in these images are first segmented using level set (LS) and watershed methods in each optical plane. The segmentation results are further refined using information from adjacent optical planes and prior knowledge of nuclear shape. Segmentation is then followed by an algorithm for 3D localization of the centroid of each nucleus (CN). Each volume of tissue is thus represented by a collection of centroids, leading to an approximately 10,000-fold reduction in data set size compared with the original image series. Our method has been tested on LSM images obtained from an embryonic mouse brain and compared with the segmentation and CN localization performed by an expert. The average Euclidean distance between CN locations obtained using our method and those obtained by the expert is 1.58±1.24 µm, a value well within the ~5 µm average radius of each nucleus. We conclude that our approach accurately segments and localizes CNs within cell-dense embryonic tissue.
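The sketch below shows a per-plane segment-then-localize pipeline in the same spirit, using a distance-transform watershed in place of the combined level-set/watershed refinement described above; Otsu thresholding, the smoothing sigma and the peak spacing are assumptions.

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import regionprops
from skimage.segmentation import watershed

def nuclei_centroids(plane, sigma=2.0, min_distance=5):
    # Smooth and binarise one optical plane, then split touching nuclei with a watershed.
    smoothed = gaussian(plane.astype(float), sigma=sigma)
    binary = smoothed > threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary)
    seeds = np.zeros(distance.shape, dtype=bool)
    seeds[tuple(coords.T)] = True
    markers, _ = ndi.label(seeds)
    labels = watershed(-distance, markers, mask=binary)
    return [p.centroid for p in regionprops(labels)]    # one (row, col) centroid per nucleus

Stacking the per-plane centroids and merging detections across adjacent planes would then give the 3D nucleus positions.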


International Conference on Image Processing | 2007

Dimensionality Reduction of Hyperspectral Images for Color Display using Segmented Independent Component Analysis

Yingxuan Zhu; Pramod K. Varshney; Hao Chen

The problem of dimensionality reduction for color representation of hyperspectral images has received recent attention. In this paper, several independent component analysis (ICA) based approaches are proposed to reduce the dimensionality of hyperspectral images for visualization. We also develop a simple but effective method, based on correlation coefficient and mutual information (CCMI), to select the suitable independent components for RGB color representation. Experimental results are presented to illustrate the performance of our approaches.
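One way to read "segmented ICA" is sketched below: split the band axis into three contiguous spectral segments, run ICA within each segment, and take one independent component per segment as the R, G and B channels. The equal three-way split and single component per segment are simplifying assumptions; the paper's segmentation of the spectrum and its CCMI-based component selection may differ.

import numpy as np
from sklearn.decomposition import FastICA

def segmented_ica_rgb(cube, random_state=0):
    # cube: hyperspectral image of shape (rows, cols, bands).
    r, c, b = cube.shape
    segments = np.array_split(np.arange(b), 3)          # three contiguous groups of bands
    channels = []
    for seg in segments:
        X = cube[:, :, seg].reshape(-1, len(seg)).astype(float)
        ic = FastICA(n_components=1, random_state=random_state).fit_transform(X)[:, 0]
        ic = (ic - ic.min()) / (ic.max() - ic.min() + 1e-12)
        channels.append(ic.reshape(r, c))
    return np.dstack(channels)                          # (rows, cols, 3) colour composite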


Proceedings of SPIE | 2011

Automated determination of arterial input function for DCE-MRI of the prostate

Yingxuan Zhu; Ming-Ching Chang; Sandeep N. Gupta

Prostate cancer is one of the most common cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none is optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on the GVF parameters for more robust fitting. Next, we fit a GVF to each pixel based on local time-domain information and eliminate pixels with falsely estimated AIFs using the derived upper and lower bounds; this makes the algorithm robust to signal inhomogeneity. After that, using spatial information such as similarity and distance between pixels, we formulate global AIF selection as an energy minimization problem and solve it with a message passing algorithm to further rule out weak pixels and optimize the detected AIF. Our method is fully automated, without training or a priori parameter settings. Experimental results on clinical data show that our method obtains promising detection accuracy (all detected pixels lie inside major arteries) and a very good match with expert-traced manual AIFs.
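A minimal sketch of the per-pixel model fit is shown below: a gamma variate function is fitted to an uptake curve with scipy, and curves that cannot be fitted within the bounds are rejected. The bound values and initial guess are illustrative assumptions, not the analytically derived bounds of the method, and the subsequent spatial energy minimization is not reproduced.

import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    # Zero before the arrival time t0, then a scaled gamma-shaped enhancement curve.
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

def fit_uptake_curve(t, signal, bounds=([0, 0, 0.1, 0.1], [np.inf, 60, 10, 20])):
    # Initial guess and bounds are illustrative, not the analytically derived bounds.
    p0 = [float(signal.max()), float(t[np.argmax(signal)]) / 2, 1.0, 1.0]
    try:
        params, _ = curve_fit(gamma_variate, t, signal, p0=p0, bounds=bounds)
    except (RuntimeError, ValueError):
        return None                                     # curve is not AIF-like; reject this pixel
    return params                                       # (A, t0, alpha, beta)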


International Symposium on Biomedical Imaging | 2010

Exploiting user labels with generalized distance transforms and random field level sets

Yingxuan Zhu; Kinh Tieu

We present an approach for exploiting user labels with random field level sets in image segmentation. A sparse set of user labels is propagated to the rest of the image by computing a generalized distance transform which takes into account image intensity information. The region-based level set formulation is modified to use random field level sets whose range is restricted to the probability values. These two ideas are combined in a single level set functional. Improved results are shown on a liver segmentation task.
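The sketch below computes an intensity-aware generalized distance transform from a sparse set of user-labelled pixels by treating the image as a cost field and running a geometric minimum-cost-path search: steps across strong gradients are expensive, so the "distance" stays small inside homogeneous regions. The specific cost (1 plus gamma times the Gaussian gradient magnitude) is an assumption; the level set coupling is not reproduced here.

import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from skimage.graph import MCP_Geometric

def generalized_distance_transform(image, seed_coords, gamma=10.0, sigma=1.0):
    # seed_coords: list of (row, col) user-labelled pixels.
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Per-pixel traversal cost: cheap in homogeneous areas, expensive across edges (an assumption).
    costs = 1.0 + gamma * gaussian_gradient_magnitude(img, sigma=sigma)
    distances, _ = MCP_Geometric(costs).find_costs(starts=seed_coords)
    return distances                                    # small values = strongly connected to labels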


Proceedings of SPIE | 2011

A nonparametric segmentation method based on structural information using level sets

Yingxuan Zhu; Samuel Cheng; Amrit L. Goel

Segmentation plays an important role in medical imaging; a precise segmentation can significantly improve the accuracy of object detection and localization. Level-set-based models are robust for image segmentation, but the parameters of the level set function are usually chosen empirically, which discourages their application in the medical area, because medical images vary widely and users may not be familiar with parameter setting for level set methods. In this paper, we present an automatic segmentation method based on a variational level set formulation. The method is formulated in terms of statistical measures and solved using the Euler-Lagrange equation. The segmentation criteria of our method rely on structural characteristics of the image, namely luminance, contrast, and correlation coefficients. These criteria are formulated into an energy function that maximizes the structural difference between object and background during segmentation. The energy function is solved and implemented using a variational level set method. Unlike prevalent level set methods, the segmentation parameters of our approach are automatically determined from the structural information of the image and updated during iteration, so our model is nonparametric. Moreover, our approach does not require any training or any a priori assumptions about probability density functions. Furthermore, our method is region-based and does not use gradients, and its parameters are updated according to image information, so it can significantly reduce computation costs in its numerical implementation. The segmentation results show that our method adequately captures the structural differences between object and background during segmentation.
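As a small illustration of the structural criteria named above, the sketch below computes luminance, contrast and correlation terms, in the style of the structural similarity (SSIM) index, between the foreground and background regions defined by a binary mask; low values indicate well-separated regions. The histogram-based correlation term is a stand-in, since the two regions contain different numbers of pixels, and the combination of these terms into the level set energy is not reproduced.

import numpy as np

def structural_terms(image, mask, bins=64, eps=1e-8):
    # Luminance, contrast and correlation terms (SSIM-style) between the regions
    # inside and outside a binary mask; low values indicate well-separated regions.
    fg = image[mask].astype(float)
    bg = image[~mask].astype(float)
    mu_f, mu_b, sd_f, sd_b = fg.mean(), bg.mean(), fg.std(), bg.std()
    luminance = (2 * mu_f * mu_b + eps) / (mu_f ** 2 + mu_b ** 2 + eps)
    contrast = (2 * sd_f * sd_b + eps) / (sd_f ** 2 + sd_b ** 2 + eps)
    # Correlation between region histograms as a stand-in for the structure term,
    # since the two regions contain different numbers of pixels.
    rng = (float(image.min()), float(image.max()))
    hf, _ = np.histogram(fg, bins=bins, range=rng, density=True)
    hb, _ = np.histogram(bg, bins=bins, range=rng, density=True)
    correlation = np.corrcoef(hf, hb)[0, 1]
    return luminance, contrast, correlation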

Collaboration


Dive into Yingxuan Zhu's collaborations.

Top Co-Authors

Hao Chen
Boise State University

Kinh Tieu
Massachusetts Institute of Technology

Andrzej Krol
State University of New York Upstate Medical University

David H. Feiglin
State University of New York Upstate Medical University

Eric C. Olson
State University of New York Upstate Medical University