Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Erkang Cheng is active.

Publication


Featured research published by Erkang Cheng.


International Conference on Computer Vision | 2011

Blurred target tracking by Blur-driven Tracker

Yi Wu; Haibin Ling; Jingyi Yu; Feng Li; Xue Mei; Erkang Cheng

Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur-free. In reality, motion blur is pervasive in real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing deblurring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred from blurs to guide the sampling process in particle-filter-based tracking. To evaluate our method, we collected a large number of video sequences with significant motion blur and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blur, BLUT can robustly and reliably track severely blurred targets.
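The blur-guided sampling idea can be sketched in a few lines; this is an illustrative toy, not the paper's sparse-approximation implementation, and the function name, parameters, and Gaussian proposal model are all assumptions: particles for the next frame are spread more widely along the estimated blur direction, since stronger blur implies faster motion that way.

```python
import random
import math

def propose_particles(center, blur_dir, blur_strength, n=100, base_std=2.0):
    """Sample candidate target positions for the next frame, biased along
    the estimated blur direction. `blur_dir` is an angle in radians;
    `blur_strength` widens the spread along that direction only."""
    along = base_std + blur_strength   # larger spread along the motion
    across = base_std                  # smaller spread across it
    particles = []
    for _ in range(n):
        da = random.gauss(0.0, along)
        dc = random.gauss(0.0, across)
        # rotate the (along, across) displacement into image coordinates
        dx = da * math.cos(blur_dir) - dc * math.sin(blur_dir)
        dy = da * math.sin(blur_dir) + dc * math.cos(blur_dir)
        particles.append((center[0] + dx, center[1] + dy))
    return particles

random.seed(0)
pts = propose_particles((50.0, 50.0), blur_dir=0.0, blur_strength=8.0)
```

With `blur_dir=0`, the proposals scatter much further horizontally than vertically, concentrating the particle budget where a fast, horizontally blurred target is likely to be.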


Machine Vision and Applications | 2014

Discriminative vessel segmentation in retinal images by fusing context-aware hybrid features

Erkang Cheng; Liang Du; Yi Wu; Ying J. Zhu; Vasileios Megalooikonomou; Haibin Ling

Vessel segmentation is an important problem in medical image analysis and is often challenging due to large variations in vessel appearance and profiles, as well as image noise. To address these challenges, we propose a solution by combining heterogeneous context-aware features with a discriminative learning framework. Our solution is characterized by three key ingredients: First, we design a hybrid feature pool containing recently invented descriptors including the stroke width transform (SWT) and Weber's local descriptors (WLD), as well as classical local features including intensity values, Gabor responses and vesselness measurements. Second, we encode context information by sampling the hybrid features from an orientation-invariant local context. Third, we treat pixel-level vessel segmentation as a discriminative classification problem, and use a random forest to fuse the rich information encoded in the hybrid context-aware features. For evaluation, the proposed method is applied to retinal vessel segmentation using three publicly available benchmark datasets. On the DRIVE and STARE datasets, our approach achieves average classification accuracies of 0.9474 and 0.9633, respectively. On the high-resolution dataset HRFID, our approach achieves average classification accuracies of 0.9647, 0.9561 and 0.9634 on three different categories, respectively. Experiments are also conducted to validate the superiority of hybrid feature fusion over each individual component.
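The orientation-invariant context sampling (the second ingredient) can be illustrated with a minimal sketch: fixed sampling offsets are rotated by the local vessel orientation before reading the feature map, so the same vessel seen at different angles yields the same context vector. The function name, the offset pattern, and the toy feature map are assumptions, not the paper's actual configuration.

```python
import numpy as np

def sample_context(feature_map, x, y, orientation, offsets):
    """Read a feature map at fixed offsets around (x, y), with the offsets
    rotated by the local orientation so the sampled context is
    orientation invariant."""
    c, s = np.cos(orientation), np.sin(orientation)
    rot = np.array([[c, -s], [s, c]])
    samples = []
    for off in offsets:
        dx, dy = rot @ np.asarray(off, dtype=float)
        xi = int(np.rint(np.clip(x + dx, 0, feature_map.shape[1] - 1)))
        yi = int(np.rint(np.clip(y + dy, 0, feature_map.shape[0] - 1)))
        samples.append(feature_map[yi, xi])
    return np.array(samples)

# toy 10x10 "feature map" whose value encodes the pixel position
fmap = np.arange(100.0).reshape(10, 10)
vec = sample_context(fmap, 5, 5, np.pi / 2, [(2, 0), (0, 2)])
```

Rotating the query by 90 degrees reorders where the offsets land, which is exactly what lets a classifier trained on upright vessels handle vessels at any angle.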


International Symposium on Biomedical Imaging | 2010

Mammographic image classification using histogram intersection

Erkang Cheng; Nianhua Xie; Haibin Ling; Predrag R. Bakic; Andrew D. A. Maidment; Vasileios Megalooikonomou

In this paper we propose using histogram intersection for mammographic image classification. First, we use the bag-of-words model for image representation, which captures the texture information by collecting local patch statistics. Then, we propose using normalized histogram intersection (HI) as a similarity measure with the K-nearest neighbor (KNN) classifier. Furthermore, by taking advantage of the fact that HI forms a Mercer kernel, we combine HI with support vector machines (SVM), which further improves the classification performance. The proposed methods are evaluated on a galactographic dataset and are compared with several previously used methods. In a thorough evaluation containing about 288 different experimental configurations, the proposed methods demonstrate promising results.
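The core similarity measure is simple enough to sketch directly: normalized histogram intersection sums the bin-wise minima of two L1-normalized histograms, giving 1 for identical histograms and 0 for disjoint ones, and it can drive a KNN classifier as described. This is a generic illustration; the helper names and the toy data are mine, not the paper's.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Normalized histogram intersection of two histograms. Because HI
    forms a Mercer kernel, the same function can also be used as an SVM
    kernel, as the paper does."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def knn_predict(query, train_hists, train_labels, k=3):
    """K-nearest-neighbor classification using HI as the similarity."""
    sims = [hist_intersection(query, h) for h in train_hists]
    top = np.argsort(sims)[-k:]          # k most similar training items
    votes = [train_labels[i] for i in top]
    return max(set(votes), key=votes.count)

# toy bag-of-words histograms: two "textures", three bins each
train_hists = [[10, 1, 1], [9, 2, 1], [1, 1, 10], [2, 1, 9]]
train_labels = [0, 0, 1, 1]
pred = knn_predict([8, 1, 2], train_hists, train_labels, k=3)
```

The query histogram is dominated by the first bin, so its nearest neighbors under HI are the class-0 training histograms and the majority vote returns 0.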


International Symposium on Biomedical Imaging | 2012

Learning-based automatic breast tumor detection and segmentation in ultrasound images

Peng Jiang; Jingliang Peng; Guoquan Zhang; Erkang Cheng; Vasileios Megalooikonomou; Haibin Ling

Ultrasound (US) imaging is widely used in the diagnosis of breast cancer. While experienced doctors may locate the tumor regions in a US image manually, it is highly desirable to develop algorithms that automatically detect the tumor regions in order to assist medical diagnosis. In this paper, we propose a novel algorithm for automatic detection of breast tumors in US images. We formulate tumor detection as a two-step learning problem: tumor localization by bounding box and exact boundary delineation. Specifically, the proposed method uses an AdaBoost classifier on Haar-like features to detect a preliminary set of tumor regions. The preliminarily detected tumor regions are further screened with a support vector machine using quantized intensity features. Finally, the random walk segmentation algorithm is performed on the US image to retrieve the boundary of each detected tumor region. The proposed method has been evaluated on a data set containing 112 breast US images, including 80 histologically confirmed diseased ones and 32 normal ones. The data set contains one image per patient, and the patients range from 31 to 75 years old. Experiments demonstrate that the proposed algorithm can automatically detect breast tumors, with their locations and boundary shapes retrieved with high accuracy.
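The quantized intensity features used in the SVM screening stage can be sketched as a fixed-size bin histogram of a candidate region's pixel values; the bin count, range, and function name here are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def quantized_intensity_features(region, n_bins=16):
    """Quantize a candidate region's 8-bit intensities into `n_bins` bins
    and return the normalized bin counts, giving every region a fixed-length
    feature vector regardless of its size."""
    region = np.asarray(region, dtype=float)
    hist, _ = np.histogram(region, bins=n_bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

# a uniform dark region maps entirely into the first bin
feat = quantized_intensity_features(np.zeros((4, 4)))
```

Fixed-length vectors like this are what make it possible to feed arbitrarily sized bounding-box candidates into a single SVM.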


Proceedings of SPIE | 2014

Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

Erkang Cheng; Liya Ma; Adam Blaisse; Erik Blasch; Carolyn Sheaff; Genshe Chen; Jie Wu; Haibin Ling

Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks on WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of the images. The feature extraction for the images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are written to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments on feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
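The split-map-aggregate flow can be mimicked in plain Python: each split of images is mapped independently to per-image features (as each slave node would do), and the per-split results are then reduced into one collection (as the HDFS aggregation step does). The feature itself, and all the names here, are stand-ins; real Hadoop jobs would use the Java or streaming MapReduce API.

```python
from functools import reduce

def map_split(split):
    """Map task: extract a feature (here just the mean intensity, as a
    placeholder) from every image in one split."""
    return [(name, {"mean": sum(pix) / len(pix)}) for name, pix in split]

def reduce_features(results_a, results_b):
    """Reduce task: merge per-split feature lists, mirroring the
    aggregation of map outputs on the distributed file system."""
    return results_a + results_b

# two splits of tiny "images" (name, flat pixel list)
splits = [
    [("img0", [1, 2, 3]), ("img1", [4, 4, 4])],
    [("img2", [0, 10])],
]
mapped = [map_split(s) for s in splits]     # would run on slave nodes in parallel
features = reduce(reduce_features, mapped)  # aggregated once all maps finish
```

Because each map task touches only its own split, adding more slave nodes scales the map phase almost linearly, which is the point of the CAWE design.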


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Automatic Dent-landmark detection in 3-D CBCT dental volumes

Erkang Cheng; Jinwu Chen; Jie Yang; Huiyang Deng; Yi Wu; Vasileios Megalooikonomou; Bryce Gable; Haibin Ling

Orthodontic craniometric landmarks provide critical information in oral and maxillofacial imaging diagnosis and treatment planning. The Dent-landmark, defined as the odontoid process of the epistropheus, is one of the key landmarks used to construct the midsagittal reference plane. In this paper, we propose a learning-based approach to automatically detect the Dent-landmark in 3D cone-beam computed tomography (CBCT) dental data. Specifically, a detector is learned using a random forest with sampled context features. Furthermore, we use a spatial prior to build a constrained search space rather than searching the full three-dimensional volume. The proposed method has been evaluated on a dataset containing 73 CBCT dental volumes and yields promising results.
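The spatial-prior idea is straightforward to sketch: instead of evaluating the detector at every voxel, candidates are enumerated only inside a box around the expected landmark position. The box-shaped prior and the function name are assumptions for illustration; in practice the prior would come from training-set statistics.

```python
def constrained_candidates(volume_shape, prior_center, prior_radius):
    """Enumerate voxel candidates inside a box defined by a spatial prior,
    instead of scanning the full 3-D volume. Coordinates are (z, y, x)."""
    lo = [max(0, int(c - prior_radius)) for c in prior_center]
    hi = [min(s, int(c + prior_radius) + 1)
          for c, s in zip(prior_center, volume_shape)]
    return [(z, y, x)
            for z in range(lo[0], hi[0])
            for y in range(lo[1], hi[1])
            for x in range(lo[2], hi[2])]

# a 100^3 volume has 1,000,000 voxels; the prior cuts this to 125 candidates
cands = constrained_candidates((100, 100, 100),
                               prior_center=(50, 40, 60), prior_radius=2)
```

The detector then scores only these candidates, which both speeds up detection and suppresses anatomically implausible false positives.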


International Symposium on Visual Computing | 2011

Kernel-based motion-blurred target tracking

Yi Wu; Jing Hu; Feng Li; Erkang Cheng; Jingyi Yu; Haibin Ling

Motion blur is pervasive in real captured video data, especially for hand-held and smartphone cameras because of their low frame rates and camera quality. This paper presents a novel Kernel-based motion-Blurred target Tracking (KBT) approach to accurately locate objects in motion-blurred video sequences, without explicitly performing deblurring. To model the underlying motion blur, we first augment the target model by synthesizing a set of blurred templates from the target with different blur directions and strengths. These templates are then represented by color histograms regularized by an isotropic kernel. To locate the optimal position for each template, we use the mean shift method for iterative optimization. Finally, the region with the maximum similarity to its corresponding template is taken as the target. To demonstrate the effectiveness and efficiency of our method, we collected several video sequences with severe motion blur and compared KBT with other traditional trackers. Experimental results show that our KBT method can robustly and reliably track strongly blurred targets.
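The kernel-regularized histogram at the heart of this representation can be sketched as follows: pixels near the patch center receive more weight via an isotropic Epanechnikov profile, so the histogram is less sensitive to background pixels at the patch border. The bin count and grayscale simplification are assumptions; the paper uses color histograms.

```python
import numpy as np

def kernel_histogram(patch, n_bins=8):
    """Intensity histogram of a square patch, regularized by an isotropic
    Epanechnikov kernel so center pixels weigh more than border pixels."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # squared normalized distance from the patch center
    r2 = ((ys - cy) / (h / 2.0)) ** 2 + ((xs - cx) / (w / 2.0)) ** 2
    weights = np.maximum(1.0 - r2, 0.0)          # Epanechnikov profile
    bins = (patch.astype(int) * n_bins) // 256   # quantize 8-bit intensities
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), weights.ravel())
    return hist / hist.sum()

hist = kernel_histogram(np.zeros((5, 5)))  # uniform dark patch
```

Mean shift then iteratively moves the candidate window toward the position whose kernel histogram best matches each blurred template.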


Computer Vision and Pattern Recognition | 2014

Curvilinear Structure Tracking by Low Rank Tensor Approximation with Model Propagation

Erkang Cheng; Yu Pang; Ying J. Zhu; Jingyi Yu; Haibin Ling

Robust tracking of deformable objects such as catheters or vascular structures in X-ray images is an important technique in image-guided medical interventions for effective motion compensation and dynamic multi-modality image fusion. Tracking such anatomical structures and devices is very challenging due to large appearance changes, the low visibility of X-ray images, and the deformable nature of the underlying motion field that results from complex 3D anatomical movements projected into 2D images. To address these issues, we propose a new deformable tracking method using a tensor-based algorithm with model propagation. Specifically, deformable tracking is formulated as a multi-dimensional assignment problem, which is solved by rank-1 l1 tensor approximation. The model prior is propagated in the course of deformable tracking. Both the higher-order information and the model prior provide powerful discriminative cues for reducing ambiguity arising from the complex background, and consequently improve tracking robustness. To validate the proposed approach, we applied it to catheter and vascular structure tracking and tested it on X-ray fluoroscopic sequences obtained from 17 clinical cases. The results show, both quantitatively and qualitatively, that our approach achieves a mean tracking error of 1.4 pixels for vascular structures and 1.3 pixels for catheters.
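A rank-1 tensor approximation can be sketched with higher-order power iteration: each factor is updated as the tensor contracted with the other two, then normalized. Note this is the standard l2 variant for illustration; the paper solves an l1 formulation, and the function name and toy tensor are mine.

```python
import numpy as np

def rank1_tensor_approx(T, n_iter=50):
    """Rank-1 approximation lam * (u ⊗ v ⊗ w) of a 3-way tensor by
    higher-order power iteration (alternating contractions)."""
    u = np.ones(T.shape[0])
    v = np.ones(T.shape[1])
    w = np.ones(T.shape[2])
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w

# build an exactly rank-1 tensor and recover its factors
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 1.0]), np.array([0.5, 2.0, 1.0])
T = np.einsum('i,j,k->ijk', a, b, c)
lam, u, v, w = rank1_tensor_approx(T)
```

In the tracking formulation, the tensor entries score joint assignments across frames, and the dominant rank-1 component picks out the most consistent assignment.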


Archive | 2012

Shape Based Conditional Random Fields for Segmenting Intracranial Aneurysms

Sajjad Hussain Baloch; Erkang Cheng; Tong Fang; Ying Zhu

Studies have found strong correlations between the risk of rupture of intracranial aneurysms and various physical measurements of the aneurysms, such as volume, surface area and neck length, among others. The accuracy of risk prediction relies on the accuracy of these quantities, which in turn is determined by the precision of the underlying segmentation algorithm. In this paper, we propose an algorithm for the separation of aneurysms in pathological vessels. The approach is based on conditional random fields (CRFs), and exploits regional shape properties for unary potentials and layout constraints for pairwise potentials to achieve a high degree of accuracy. To this end, we construct very rich rotation-invariant shape descriptors, and couple them with randomized decision trees to determine posterior probabilities. These probabilities define weak priors in the unary potentials, which are also combined with strong priors determined from user interaction. Pairwise potentials are used to impose smoothness as well as spatial ordering constraints. The proposed descriptor is independent of surface orientation, and is richer than existing approaches due to attribute weighting. The conditional probability of the CRF is maximized through graph cuts, and the approach is validated on a real dataset against the ground truth, resulting in an area overlap ratio of 88.1%. Most importantly, it successfully solves the "touching vessel leaking" problem. (Corresponding author: Sajjad Baloch, Siemens Corporate Research, Princeton, NJ, [email protected])
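The objective that graph cuts minimizes can be shown in miniature: a labeling's energy is the sum of per-node unary costs plus a smoothness penalty for every edge whose endpoints disagree (a Potts pairwise term). This toy stubs out the shape-descriptor unaries and ordering constraints; the names and numbers are illustrative only.

```python
def crf_energy(labels, unary, edges, smooth_w=1.0):
    """Energy of a binary labeling under a CRF: unary costs plus a Potts
    pairwise penalty for each edge with disagreeing labels. Lower is
    better; graph cuts finds the global minimum for this form."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(smooth_w for i, j in edges if labels[i] != labels[j])
    return e

# three nodes on a chain; unary[i][l] is the cost of giving node i label l
unary = [[0.0, 2.0], [2.0, 0.0], [0.0, 2.0]]
edges = [(0, 1), (1, 2)]
```

Here the data-preferred labeling (0, 1, 0) pays two smoothness penalties while the smooth labeling (0, 0, 0) pays one unary penalty, which is exactly the trade-off the pairwise weight controls.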


IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology | 2011

Learning-Based Vessel Segmentation in Mammographic Images

Erkang Cheng; Shawn McLaughlin; Vasileios Megalooikonomou; Predrag R. Bakic; Andrew D. A. Maidment; Haibin Ling

In this paper we propose a learning-based method for vessel segmentation in mammographic images. To capture the large variation in vessel patterns not only across subjects but also within a subject, we create a feature pool containing local, Gabor and Haar features extracted from mammographic images, generating a feature space of very high dimension. We also employ a huge number of training samples, which essentially consists of the pixels in the training images. To deal with the very high dimensional feature space and the huge number of training samples, we apply a forest of boosting trees for vessel segmentation. Specifically, we use the standard AdaBoost algorithm for each tree in the forest. The randomness is encoded, when training each AdaBoost tree, by using a randomly sampled training set (pixels) and randomly selected features from the whole feature pool. The proposed method is tested on a real dataset of 20 anonymized mammographic images. The effectiveness of the proposed features and classifiers is demonstrated in experiments where we compare different approaches and feature combinations. We also present a full analysis of the different types of features.
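The randomization scheme is easy to sketch: each member of the forest is trained on a random subset of the pixels and a random subset of the feature pool. `train_adaboost` is a deliberate placeholder for an AdaBoost training routine, and all names and fractions here are illustrative assumptions.

```python
import random

def build_forest(samples, labels, feature_pool, n_trees=10,
                 sample_frac=0.5, feature_frac=0.3, train_adaboost=None):
    """Sketch of a randomized ensemble of boosting trees: each member is
    trained on a random subset of samples (pixels) and a random subset of
    the feature pool. `train_adaboost` stands in for the real trainer."""
    forest = []
    for _ in range(n_trees):
        idx = random.sample(range(len(samples)),
                            int(len(samples) * sample_frac))
        feats = random.sample(feature_pool,
                              int(len(feature_pool) * feature_frac))
        sub_x = [samples[i] for i in idx]
        sub_y = [labels[i] for i in idx]
        if train_adaboost is not None:
            forest.append(train_adaboost(sub_x, sub_y, feats))
        else:
            forest.append((sub_x, sub_y, feats))  # record the random subsets
    return forest

random.seed(1)
forest = build_forest(list(range(100)), [0] * 100, list(range(40)), n_trees=5)
```

Decorrelating the trees this way is what lets a forest built from strong AdaBoost learners still benefit from averaging at test time.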

Collaboration


Dive into Erkang Cheng's collaborations.

Top Co-Authors

Yi Wu

Nanjing University of Information Science and Technology

Predrag R. Bakic

University of Pennsylvania

Jingyi Yu

University of Delaware

Lav Rai

Pennsylvania State University
