Publication


Featured research published by Changyang Li.


Computer Vision and Pattern Recognition | 2015

Robust saliency detection via regularized random walks ranking

Changyang Li; Yuchen Yuan; Weidong Cai; Yong Xia; David Dagan Feng

In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to a significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wise saliency maps from the superpixel-based background and foreground saliency estimations. Experimental results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.
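
A minimal sketch of the graph-ranking step that approaches like this build on, using the closed-form manifold-ranking solution f* = (I - alpha*S)^(-1) y with boundary superpixels as background seeds. The affinity matrix, seed choice, and parameter value are assumptions for illustration; the paper's regularized random walks ranking adds a pixel-level term on top of such superpixel rankings.

```python
import numpy as np

def manifold_ranking_saliency(W, seed_mask, alpha=0.99):
    """Rank superpixels against background seeds via manifold ranking.

    W         : (n, n) symmetric affinity matrix between superpixels
    seed_mask : (n,) boolean array marking query (e.g. boundary) nodes
    alpha     : propagation strength in (0, 1)
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt           # symmetrically normalized affinity
    y = seed_mask.astype(float)               # indicator vector of seed nodes
    # Closed-form ranking: f* = (I - alpha * S)^{-1} y
    f = np.linalg.solve(np.eye(len(y)) - alpha * S, y)
    return 1.0 - f / (f.max() + 1e-12)        # background seeds -> low saliency
```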


IEEE Transactions on Biomedical Engineering | 2013

A Likelihood and Local Constraint Level Set Model for Liver Tumor Segmentation from CT Volumes

Changyang Li; Xiuying Wang; Stefan Eberl; Michael J. Fulham; Yong Yin; Jinhu Chen; David Dagan Feng

In computed tomography, liver tumors often have heterogeneous density and weak boundaries, and they are surrounded by other abdominal structures with similar densities; these factors limit accurate hepatic tumor segmentation. We propose a level set model that incorporates a likelihood energy with an edge energy. Minimizing the likelihood energy approximates the density distribution of the target and the multimodal density distribution of the background, which can comprise multiple regions. In the edge energy formulation, our edge detector preserves the ramp associated with weak boundaries. We compared our approach to the Chan-Vese and geodesic level set models and to manual segmentation performed by clinical experts. The Chan-Vese model was not successful in segmenting hepatic tumors, and our model outperformed the geodesic level set model. Our results on 18 clinical datasets showed that our algorithm had a Jaccard distance error of 14.4 ± 5.3%, a relative volume difference of -8.1 ± 2.1%, an average surface distance of 2.4 ± 0.8 mm, an RMS surface distance of 2.9 ± 0.7 mm, and a maximum surface distance of 7.2 ± 3.1 mm.
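
To make the likelihood term concrete, here is a simplified sketch of a per-pixel likelihood force for a region-based level set step. It models both regions with a single Gaussian each, whereas the paper's model uses a multimodal background density and couples the term with an edge energy; the function name and sign convention are assumptions.

```python
import numpy as np

def likelihood_force(img, phi):
    """Per-pixel force comparing Gaussian log-likelihoods of the target
    region (phi > 0) and the background (phi <= 0). Positive values mean
    the target model fits better, pushing the contour outward there."""
    inside, outside = img[phi > 0], img[phi <= 0]
    mu_i, s_i = inside.mean(), inside.std() + 1e-6
    mu_o, s_o = outside.mean(), outside.std() + 1e-6
    log_p_i = -0.5 * ((img - mu_i) / s_i) ** 2 - np.log(s_i)
    log_p_o = -0.5 * ((img - mu_o) / s_o) ** 2 - np.log(s_o)
    return log_p_i - log_p_o
```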


Applied Physics Letters | 2004

Dielectric/metal sidewall diffusion barrier for Cu/porous ultralow-k interconnect technology

Zhe Chen; K. Prasad; Changyang Li; P.W. Lu; S.S. Su; L. J. Tang; D. Gui; S. Balakumar; R. Shu; Rakesh Kumar

With the acknowledged insufficiency of traditional Ta or TaN barriers, deposited by physical vapor deposition (PVD), in Cu/porous ultralow-k intermetal dielectric integration, an amorphous hydrogenated SiC (a-SiC:H)/Ta bilayer sidewall diffusion barrier has been fabricated using a 0.13 μm Cu/porous ultralow-k [Porous-SiLK (a proprietary product of Dow Chemical Corporation, USA), k∼2.2] single damascene process. The electrical tests show that the line-to-line leakage current and the electrical breakdown field (EBD) of samples with this a-SiC:H/Ta dielectric/metal bilayer structure are significantly improved compared to the conventional PVD multi-stacked Ta(N) sidewall barrier. This improvement is mostly due to surface roughness modification after the deposition of the a-SiC:H film, which, in addition to being a good barrier to Cu diffusion, can effectively “seal” the weak points on the surface of the porous low-k material that are responsible for sidewall barrier failure.


Computer Methods and Programs in Biomedicine | 2012

Automated PET-guided liver segmentation from low-contrast CT volumes using probabilistic atlas

Changyang Li; Xiuying Wang; Yong Xia; Stefan Eberl; Yong Yin; David Dagan Feng

The use of the functional PET information from PET-CT scans to improve liver segmentation from low-contrast CT data is yet to be fully explored. In this paper, we fully utilize PET information to tackle challenging liver segmentation issues including (1) the separation and removal of the surrounding muscles from the liver region of interest (ROI), (2) better localization and mapping of the probabilistic atlas onto the low-contrast CT for a more accurate tissue classification, and (3) an improved initial estimation of the liver ROI to speed up the convergence of the expectation-maximization (EM) algorithm for the Gaussian mixture model under the guidance of a probabilistic atlas. The primary liver extraction from the PET volume provides a simple mechanism to avoid the complicated feature-extraction pre-processing used in existing liver CT segmentation methods. It is able to guide the probabilistic atlas to better conform to the CT liver region and hence helps to overcome the challenge posed by liver shape variability. Our proposed method was evaluated against manual segmentation by experienced radiologists. Experimental results on 35 clinical PET-CT studies demonstrated that our method is accurate and robust in automated normal liver segmentation.
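
A simplified sketch of EM for a two-class Gaussian mixture with a spatial prior from a probabilistic atlas, in the spirit of step (3) above. Here `atlas_prior` is assumed to hold per-voxel liver probabilities after PET-guided localization; the paper's model covers more tissue classes and initializes the EM from the PET-derived ROI.

```python
import numpy as np

def atlas_em(intensities, atlas_prior, n_iter=20):
    """EM for a liver-vs-background Gaussian mixture, with the E-step
    responsibilities weighted by a probabilistic atlas prior."""
    x = intensities.ravel().astype(float)
    p = atlas_prior.ravel()
    r = p.copy()                              # initialize from the atlas
    for _ in range(n_iter):
        # M-step: responsibility-weighted means and standard deviations.
        mu1 = (r * x).sum() / (r.sum() + 1e-12)
        mu0 = ((1 - r) * x).sum() / ((1 - r).sum() + 1e-12)
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / (r.sum() + 1e-12)) + 1e-6
        s0 = np.sqrt(((1 - r) * (x - mu0) ** 2).sum() / ((1 - r).sum() + 1e-12)) + 1e-6
        # E-step: Gaussian likelihoods weighted by the atlas prior.
        g1 = p * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        g0 = (1 - p) * np.exp(-0.5 * ((x - mu0) / s0) ** 2) / s0
        r = g1 / (g1 + g0 + 1e-12)
    return r.reshape(intensities.shape)       # per-voxel liver probability
```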


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

Automated saliency-based lesion segmentation in dermoscopic images.

Euijoon Ahn; Lei Bi; Younhyun Jung; Jinman Kim; Changyang Li; Michael J. Fulham; David Dagan Feng

The segmentation of skin lesions in dermoscopic images is considered one of the most important steps in computer-aided diagnosis (CAD) for automated melanoma diagnosis. Existing methods, however, have problems with over-segmentation and do not perform well when the contrast between the lesion and its surrounding skin is low. Hence, in this study, we propose a new automated saliency-based skin lesion segmentation (SSLS) method designed to exploit the inherent properties of dermoscopic images, which have a focal central region and subtle contrast discrimination with the surrounding regions. The proposed method was evaluated on a public dataset of lesional dermoscopic images and was compared to established methods for lesion segmentation, including adaptive thresholding, Chan-based level set, and seeded region growing. Our results show that SSLS outperformed the other methods in regard to accuracy and robustness, in particular for difficult cases.
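
One of the "inherent properties" mentioned above, the focal central lesion, is often encoded as a Gaussian center prior multiplied into a contrast-based saliency map. The helper below is a hypothetical illustration of that single ingredient, not the paper's actual formulation.

```python
import numpy as np

def center_prior(h, w, sigma_frac=0.33):
    """Gaussian weight map peaking at the image center, used to bias
    saliency toward the focal central region of a dermoscopic image.
    `sigma_frac` (width relative to the shorter side) is an assumption."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    s2 = (sigma_frac * min(h, w)) ** 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * s2))
```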


IEEE Journal of Biomedical and Health Informatics | 2017

Saliency-based Lesion Segmentation via Background Detection in Dermoscopic Images

Euijoon Ahn; Jinman Kim; Lei Bi; Ashnil Kumar; Changyang Li; Michael J. Fulham; David Dagan Feng

The segmentation of skin lesions in dermoscopic images is a fundamental step in automated computer-aided diagnosis of melanoma. Conventional segmentation methods, however, have difficulties when the lesion borders are indistinct and when contrast between the lesion and the surrounding skin is low. They also perform poorly when there is a heterogeneous background or a lesion that touches the image boundaries; this then results in under- and oversegmentation of the skin lesion. We suggest that saliency detection using the reconstruction errors derived from a sparse representation model coupled with a novel background detection can more accurately discriminate the lesion from surrounding regions. We further propose a Bayesian framework that better delineates the shape and boundaries of the lesion. We also evaluated our approach on two public datasets comprising 1100 dermoscopic images and compared it to other conventional and state-of-the-art unsupervised (i.e., no training required) lesion segmentation methods, as well as the state-of-the-art unsupervised saliency detection methods. Our results show that our approach is more accurate and robust in segmenting lesions compared to other methods. We also discuss the general extension of our framework as a saliency optimization algorithm for lesion segmentation.
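
A sketch of the reconstruction-error idea: regions that detected background templates reconstruct poorly are likely lesion. For brevity, plain least squares stands in for the paper's sparse coding, and the Bayesian refinement is omitted; `feats` and `bg_idx` are assumed inputs (region features and indices of regions flagged as background).

```python
import numpy as np

def reconstruction_error_saliency(feats, bg_idx):
    """Saliency from reconstruction error against a background dictionary.

    feats  : (n, d) feature matrix, one row per image region
    bg_idx : indices of regions detected as background
    """
    B = feats[bg_idx].T                        # (d, k) background dictionary
    # Least-squares codes for every region against the background basis.
    codes, *_ = np.linalg.lstsq(B, feats.T, rcond=None)
    err = np.linalg.norm(feats.T - B @ codes, axis=0)
    return err / (err.max() + 1e-12)           # high error -> likely lesion
```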


IEEE Transactions on Image Processing | 2013

Robust Model for Segmenting Images With/Without Intensity Inhomogeneities

Changyang Li; Xiuying Wang; Stefan Eberl; Michael J. Fulham; David Dagan Feng

Intensity inhomogeneities and different types/levels of image noise are the two major obstacles to accurate image segmentation by region-based level set models. To provide a more general solution to these challenges, we propose a novel segmentation model that considers global and local image statistics to eliminate the influence of image noise and to compensate for intensity inhomogeneities. In our model, the global energy derived from a Gaussian model estimates the intensity distribution of the target object and background; the local energy derived from the mutual influences of neighboring pixels can eliminate the impact of image noise and intensity inhomogeneities. The robustness of our method was validated by segmenting synthetic images with/without intensity inhomogeneities and with different types/levels of noise, including Gaussian noise, speckle noise, and salt-and-pepper noise, as well as images from different medical imaging modalities. Quantitative experimental comparisons demonstrate that our method is more robust and more accurate in segmenting images with intensity inhomogeneities than the local binary fitting technique and its more recent systematic model. Our technique also outperformed the region-based Chan-Vese model when dealing with images without intensity inhomogeneities and produced better segmentation results than graph-based algorithms, including graph cuts and random walker, when segmenting noisy images.
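
A sketch of a local energy term in the spirit of the model's local statistics: each pixel is compared with Gaussian-weighted inside/outside means, so a slowly varying bias field largely cancels out. This follows the familiar local-fitting construction (as in the local binary fitting technique the paper compares against) rather than the paper's exact formulation, and it omits the coupled global Gaussian energy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_fitting_force(img, phi, sigma=3.0):
    """Local-fitting force for a level set step; positive where the pixel
    matches the local inside mean better than the local outside mean."""
    img = img.astype(float)
    h = (phi > 0).astype(float)               # Heaviside of the level set
    eps = 1e-8
    # Gaussian-weighted local means inside and outside the contour.
    f_in = gaussian_filter(img * h, sigma) / (gaussian_filter(h, sigma) + eps)
    f_out = gaussian_filter(img * (1 - h), sigma) / (gaussian_filter(1 - h, sigma) + eps)
    return (img - f_out) ** 2 - (img - f_in) ** 2
```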


IEEE Journal of Biomedical and Health Informatics | 2013

Joint Probabilistic Model of Shape and Intensity for Multiple Abdominal Organ Segmentation From Volumetric CT Images

Changyang Li; Xiuying Wang; Junli Li; Stefan Eberl; Michael J. Fulham; Yong Yin; David Dagan Feng

We propose a novel joint probabilistic model that correlates a new probabilistic shape model with the corresponding global intensity distribution to segment multiple abdominal organs simultaneously. Our probabilistic shape model estimates the probability of an individual voxel belonging to the estimated shape of the object. The probability density of the estimated shape is derived from a combination of the shape variations of the target class and the observed shape information. To better capture the shape variations and reduce computational complexity, we used probabilistic principal component analysis optimized by expectation maximization. The maximum a posteriori estimation was optimized by the iterated conditional mode-expectation maximization. We used 72 training datasets, including low- and high-contrast computed tomography images, to construct the shape models for the liver, spleen, and both kidneys. We evaluated our algorithm on 40 test datasets that were grouped into normal (34 datasets) and pathologic (six datasets) classes. The testing datasets were from different databases, and manual segmentation was performed by different clinicians. We measured the volumetric overlap percentage error, relative volume difference, average square symmetric surface distance, false positive rate, and false negative rate, and our method achieved accurate and robust segmentation of multiple abdominal organs simultaneously.
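
The probabilistic PCA ingredient is straightforward to reproduce: scikit-learn's PCA scores samples under the Tipping-Bishop probabilistic PCA density, yielding a shape log-likelihood of the kind a joint model can combine with an intensity term. The data below are placeholders; the shape representation and sizes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder training data: rows are aligned, vectorized organ shapes
# (e.g. flattened signed distance maps), one row per training subject.
rng = np.random.default_rng(0)
train_shapes = rng.standard_normal((72, 4096))

# Fitting plain PCA; score_samples() then evaluates the probabilistic
# PCA log-likelihood of new shapes under the fitted model.
ppca = PCA(n_components=10).fit(train_shapes)

new_shape = rng.standard_normal((1, 4096))
shape_log_lik = ppca.score_samples(new_shape)  # shape term of a joint model
```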


Computerized Medical Imaging and Graphics | 2016

Adapting content-based image retrieval techniques for the semantic annotation of medical images

Ashnil Kumar; Shane Dyer; Jinman Kim; Changyang Li; Philip Heng Wai Leong; Michael J. Fulham; Dagan Feng

The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty of discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
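
A minimal sketch of the weighted nearest-neighbour strategy: retrieve the most similar labelled images and let them vote, weighted by inverse distance. The feature extraction and distance metric are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def knn_annotate(query, db_feats, db_labels, k=5):
    """Annotate a query image by weighted voting among its k nearest
    labelled neighbours in feature space.

    query     : (d,) feature vector of the unlabelled image
    db_feats  : (n, d) features of the labelled image collection
    db_labels : length-n sequence of semantic annotations
    """
    d = np.linalg.norm(db_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for i in nearest:
        w = 1.0 / (d[i] + 1e-12)              # closer neighbours count more
        votes[db_labels[i]] = votes.get(db_labels[i], 0.0) + w
    return max(votes, key=votes.get)          # best-supported annotation
```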


BMC Bioinformatics | 2016

DeepGene: an advanced cancer type classifier based on deep learning and somatic point mutations

Yuchen Yuan; Yi Shi; Changyang Li; Jinman Kim; Weidong Cai; Ze-Guang Han; David Dagan Feng

Background: With the development of DNA sequencing technology, large amounts of sequencing data have become available in recent years, providing unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). In existing SMCC methods, however, issues such as high data sparsity, small sample size, and the use of simple linear classifiers are major obstacles to improving classification performance.

Results: To address these obstacles, we propose DeepGene, an advanced deep neural network (DNN) based classifier that consists of three steps: first, the clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes; second, the indexed sparsity reduction (ISR) converts the gene data into indexes of its non-zero elements, thereby significantly suppressing the impact of data sparsity; finally, the data after CGF and ISR are fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, a reformulated subset of the TCGA dataset containing 12 selected types of cancer, show that CGF, ISR, and the DNN all contribute to the overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene achieves at least a 24% improvement in testing accuracy.

Conclusions: Based on deep learning and somatic point mutation data, we devise DeepGene, an advanced cancer type classifier that addresses the obstacles in existing SMCC studies. Experiments indicate that DeepGene outperforms three widely adopted existing classifiers, mainly owing to its deep learning module, which is able to extract high-level features relating combinatorial somatic point mutations to cancer types.
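
The ISR step described above is simple to sketch: replace a long, mostly-zero mutation vector by the indices of its non-zero entries, zero-padded to a fixed length. The padding length is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

def indexed_sparsity_reduction(mutations, max_len=512):
    """Convert a sparse binary mutation vector into a fixed-length array
    of the (1-based) indices of its non-zero entries; 0 marks padding.
    The compact result becomes the input to the downstream DNN."""
    idx = np.flatnonzero(mutations) + 1        # 1-based; 0 reserved for padding
    out = np.zeros(max_len, dtype=np.int64)
    n = min(idx.size, max_len)
    out[:n] = idx[:n]
    return out
```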

Collaboration


Dive into Changyang Li's collaboration.

Top Co-Authors

Michael J. Fulham

Royal Prince Alfred Hospital

Stefan Eberl

Royal Prince Alfred Hospital

Ang Li

University of Sydney