Publications

Featured research published by Yinghuan Shi.


International Conference on Pattern Recognition | 2010

Real-Time Abnormal Event Detection in Complicated Scenes

Yinghuan Shi; Yang Gao; Ruili Wang

In this paper, we propose a novel real-time abnormal event detection framework that requires only a short training period and offers fast processing. Our approach is based on phase correlation and our newly developed spatial-temporal co-occurrence Gaussian mixture models (STCOG), and proceeds in the following steps: (i) each frame is divided into non-overlapping local regions; (ii) phase correlation is used to estimate the motion vectors between two successive frames for all corresponding local regions; and (iii) STCOG is used to model normal events and to detect abnormal events whenever a deviation from the trained STCOG is found. The proposed approach can also update its parameters incrementally and can be applied to complicated scenes. It outperforms previous approaches in terms of shorter training periods and lower computational complexity.
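The phase-correlation step in (ii) is standard enough to sketch. Below is a minimal NumPy illustration of estimating one local region's motion vector between two frames; the region partitioning and the STCOG model are not reproduced, and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def phase_correlation_shift(region_prev, region_curr, eps=1e-8):
    """Estimate the (dy, dx) translation of region_curr relative to
    region_prev from the peak of the phase-correlation surface."""
    F1 = np.fft.fft2(region_prev)
    F2 = np.fft.fft2(region_curr)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + eps   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = region_prev.shape
    if dy > h // 2:                            # wrap large shifts to negative
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# usage: motion vector of one 16x16 local region between successive frames
prev = np.random.rand(16, 16)
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))   # synthetic motion
print(phase_correlation_shift(prev, curr))          # -> (2, -3)
```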


Computer Vision and Pattern Recognition | 2013

Prostate Segmentation in CT Images via Spatial-Constrained Transductive Lasso

Yinghuan Shi; Shu Liao; Yaozong Gao; Daoqiang Zhang; Yang Gao; Dinggang Shen

Accurate prostate segmentation in CT images is a significant yet challenging task for image-guided radiotherapy. In this paper, a novel semi-automated prostate segmentation method is presented. To segment the prostate in the current treatment image, the physician first takes a few seconds to manually specify the first and last slices of the prostate in the image space. The prostate is then segmented automatically in two steps: (i) prostate-likelihood estimation, which predicts the prostate likelihood of each voxel in the current treatment image and generates a 3-D prostate-likelihood map via the proposed Spatial-COnstrained Transductive LassO (SCOTO); and (ii) multi-atlas label fusion, which produces the final segmentation by using the prostate shape information obtained from the planning and previous treatment images. Experimental results show that the proposed method outperforms several state-of-the-art methods on a real prostate CT dataset of 24 patients with 330 images. Moreover, the method is clinically feasible, since it only requires the physician to spend a few seconds specifying the first and last slices of the prostate.
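SCOTO itself is specific to the paper, but the general flavor of sparse-coding-based likelihood estimation can be sketched with a plain Lasso from scikit-learn: reconstruct a target voxel's patch feature from an atlas patch dictionary and read the prostate likelihood off the weighted atlas labels. This is a simplified stand-in; SCOTO's spatial constraint and transductive coupling are not modeled, and all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 200))          # atlas dictionary: 200 patches, 64-d features
labels = rng.integers(0, 2, size=200)   # 1 = prostate patch, 0 = background patch
target = D[:, labels == 1][:, :5].mean(axis=1)  # a target voxel's patch feature

lasso = Lasso(alpha=0.01, positive=True, max_iter=5000)
lasso.fit(D, target)
w = lasso.coef_
likelihood = w @ labels / (w.sum() + 1e-12)     # weighted vote of atlas labels
print(f"prostate likelihood: {likelihood:.3f}")
```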


IEEE Transactions on Neural Networks | 2015

MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis

Wanqi Yang; Yang Gao; Yinghuan Shi; Longbing Cao

Multiview data arise in many applications, such as video understanding, image classification, and social media analysis. However, as the data dimension increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lasso models at the view level, we focus on sample-level significance and introduce pattern-specific weights into MRM-Lasso. These weights measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix composed of the pattern-specific weights. The alternating direction method of multipliers (ADMM) is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that the features selected by MRM-Lasso yield better multiview classification performance than the baselines. Moreover, pattern-specific weights prove significant for learning from multiview data, compared with view-specific weights.
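Inside an ADMM solver for such an objective, the rank-minimization subproblem is typically handled by singular value thresholding, the proximal operator of the nuclear norm. Below is a minimal sketch of that one step only, not the full MRM-Lasso algorithm, whose Lasso terms and exact variable splitting are omitted.

```python
import numpy as np

def svt(W, tau):
    """prox_{tau * ||.||_*}(W): shrink singular values toward zero."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# example: a noisy rank-1 "pattern weight" matrix (samples x views)
rng = np.random.default_rng(1)
W = np.outer(rng.random(50), rng.random(4)) + 0.05 * rng.normal(size=(50, 4))
W_lowrank = svt(W, tau=0.5)
print(np.linalg.matrix_rank(W_lowrank, tol=1e-6))  # typically 1 after shrinkage
```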


International Journal of Neural Systems | 2011

XCSc: A Novel Approach to Clustering with Extended Classifier System

Liangdong Shi; Yinghuan Shi; Yang Gao; Lin Shang; Yubin Yang

In this paper, we propose a novel approach to clustering noisy and complex data sets based on the eXtended Classifier System (XCS). The proposed approach, termed XCSc, has three main processes: (a) a learning process to evolve the rule population; (b) a rule compacting process to remove redundant rules after learning; and (c) a rule merging process to deal with the overlapping rules that commonly occur between clusters. In the first process, we modify the clustering mechanisms of the currently available XCS and develop a new accelerated learning method to improve the quality of the evolved rule population. In the second process, an effective rule compacting algorithm is applied. The rule merging process is based on our newly proposed agglomerative hierarchical rule merging algorithm, which comprises the following steps: (i) all generated rules are modeled as a graph, with each rule represented by a node; (ii) the vertices in the graph are merged into a number of subgraphs (i.e., rule clusters) under pre-defined criteria, yielding the final rule set that represents the clusters; and (iii) each data point is re-checked and assigned to the cluster it belongs to, guided by the final rule set. In our experiments, we compared XCSc with CHAMELEON, a benchmark algorithm well known for its excellent performance, on a number of challenging data sets. The results show that the proposed approach outperforms CHAMELEON in success rate and also demonstrates good stability.
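The graph-based merging in steps (i) and (ii) can be illustrated compactly. In the sketch below, rules are hyper-rectangles over attribute intervals, edges join rules whose intervals overlap, and connected components become rule clusters; the paper's actual merge criteria and the XCS learning and compaction stages are not reproduced.

```python
from itertools import combinations

rules = [  # each rule: list of (low, high) condition intervals per attribute
    [(0.0, 0.4), (0.0, 0.5)],
    [(0.3, 0.6), (0.2, 0.7)],   # overlaps rule 0
    [(0.8, 1.0), (0.8, 1.0)],   # isolated
]

def overlaps(r1, r2):
    """True if the two hyper-rectangles intersect in every attribute."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

# union-find over the rule graph: merge rules connected by overlap edges
parent = list(range(len(rules)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in combinations(range(len(rules)), 2):
    if overlaps(rules[i], rules[j]):
        parent[find(i)] = find(j)

clusters = {}
for i in range(len(rules)):
    clusters.setdefault(find(i), []).append(i)
print(list(clusters.values()))  # -> [[0, 1], [2]]
```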


Information Processing in Medical Imaging | 2013

Automatic prostate MR image segmentation with sparse label propagation and domain-specific manifold regularization

Shu Liao; Yaozong Gao; Yinghuan Shi; Ambereen Yousuf; Ibrahim Karademir; Aytekin Oto; Dinggang Shen

Automatic prostate segmentation in MR images plays an important role in prostate cancer diagnosis. However, there are two main challenges: (1) large inter-subject prostate shape variations and (2) inhomogeneous prostate appearance. To address these challenges, we propose a new hierarchical prostate MR segmentation method, with the main contributions in the following aspects. First, the most salient features are learned from atlases with a subclass discriminant analysis (SDA) method, which finds a discriminant feature subspace by simultaneously maximizing the inter-class distance and minimizing the intra-class variations. The projected features, rather than voxel-wise intensity alone, serve as the anatomical signature of each voxel. Second, based on the projected features, a new multi-atlas sparse label fusion framework is proposed to estimate the prostate likelihood of each voxel in the target image at the coarse level. Third, a domain-specific semi-supervised manifold regularization method is proposed to incorporate the most reliable patient-specific information, identified by the prostate likelihood map, to refine the segmentation result at the fine level. Our method is evaluated on a T2-weighted prostate MR image dataset of 66 patients and compared with two state-of-the-art segmentation methods. Experimental results show that our method consistently achieves higher segmentation accuracy than the methods under comparison.
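The manifold-regularization idea in the third contribution has a standard graph-Laplacian form: fit the confident labels while keeping the label function smooth over a feature-space affinity graph. Here is a generic, self-contained sketch on synthetic data; the SDA features and sparse multi-atlas fusion are not reproduced, and the closed-form solve and the 0.1 trade-off are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))              # voxel features (e.g., SDA projections)
y = np.full(100, np.nan)
y[:20] = (X[:20, 0] > 0).astype(float)     # confident voxels carry labels

# dense Gaussian affinity graph and its unnormalized Laplacian
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W

# minimize ||f_l - y_l||^2 + 0.1 * f^T L f  (closed form)
labeled = ~np.isnan(y)
M = np.diag(labeled.astype(float)) + 0.1 * L
f = np.linalg.solve(M, np.where(labeled, y, 0.0))
print(f[:5].round(2))                      # propagated prostate likelihoods
```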


Applied Intelligence | 2013

Transductive cost-sensitive lung cancer image classification

Yinghuan Shi; Yang Gao; Ruili Wang; Ying Zhang; Dong Wang

Previous computer-aided lung cancer image classification methods are cost-blind: they assume that the two kinds of misdiagnosis (categorizing a cancerous image as normal, or a normal image as cancerous) incur equal costs. In addition, previous methods usually require experienced pathologists to label a large number of images as training samples. To this end, a novel transductive cost-sensitive method is proposed for lung cancer image classification on needle biopsy specimens, which requires the pathologist to label only a small number of images. The proposed method analyzes lung cancer images with the following procedures: (i) an image capturing procedure to capture images from the needle biopsy specimens; (ii) a preprocessing procedure to segment the individual cells from the captured images; (iii) a feature extraction procedure to extract features (i.e., shape, color, texture, and statistical information) from the obtained individual cells; (iv) a codebook learning procedure that learns a codebook on the extracted features via k-means clustering, so that each image can be represented as a histogram over the codewords; and (v) an image classification procedure that predicts labels for testing images using the proposed multi-class cost-sensitive Laplacian regularized least squares (mCLRLS). We evaluate the proposed method on a real image set provided by Bayi Hospital, which contains 271 images covering normal cases and four types of cancerous ones (squamous carcinoma, adenocarcinoma, small cell cancer, and nuclear atypia). The experimental results demonstrate that the proposed method achieves a lower cancer-misdiagnosis rate and lower total misdiagnosis costs than previous methods, including supervised approaches (kNN, mcSVM, and MCMI-AdaBoost), a semi-supervised approach (LapRLS), and a cost-sensitive approach (CS-SVM). The experiments also show that both the transductive and the cost-sensitive settings are useful when only a small number of training images is available.
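Step (iv) is a conventional bag-of-words construction, which can be sketched directly: k-means learns the codewords, and each image becomes a normalized histogram of its cells' codeword assignments. The feature extraction and the mCLRLS classifier are omitted, and the data below are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
cell_features = rng.normal(size=(500, 32))   # features from segmented cells
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(cell_features)

def image_histogram(cells, km):
    """Represent one image (the features of its cells) as a normalized
    histogram over the learned codewords."""
    words = km.predict(cells)
    hist = np.bincount(words, minlength=km.n_clusters).astype(float)
    return hist / hist.sum()

one_image_cells = cell_features[:40]         # pretend these cells share an image
print(image_histogram(one_image_cells, km).round(3))
```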


Medical Physics | 2014

Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection.

Sang Hyun Park; Yaozong Gao; Yinghuan Shi; Dinggang Shen

PURPOSE: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy for prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and inter-observer variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result from a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct segmentations produced by any automatic or interactive segmentation method.

METHODS: The authors formulate the editing problem as a semi-supervised learning problem that can utilize both a priori knowledge from training data and the valuable information in user interactions. Specifically, within a region of interest near the given user interactions, appropriate training labels that match the user interactions well are locally searched from a training set. By voting among the selected training labels, confident prostate and background voxels, as well as unconfident voxels, are estimated. To reflect the informative relationships between voxels, location-adaptive features are selected from the confident voxels using a regression forest and the Fisher separation criterion. The manifold configuration computed in the derived feature space is then enforced in the semi-supervised learning algorithm, which predicts the labels of the unconfident voxels.

RESULTS: The proposed interactive segmentation method was applied to correct the automatic segmentation results of 30 challenging CT images. The correction was conducted three times, with different user interactions performed at different time periods, in order to evaluate both efficiency and robustness. The automatic segmentation results, with an original average Dice similarity coefficient of 0.78, were improved to 0.865-0.872 after 55-59 interactions using the proposed method, where each editing step took less than 3 s. In addition, the proposed method produced the most consistent editing results across different user interactions, compared to other methods.

CONCLUSIONS: The proposed method obtains robust editing results with few interactions for various erroneous segmentation cases, by selecting location-adaptive features and imposing manifold regularization. The authors expect the proposed method to largely reduce the laborious burden of manual editing, as well as both intra- and inter-observer variability across clinicians.
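The location-adaptive feature selection described in METHODS can be approximated with a random-forest importance ranking, one of the two criteria the authors mention. Below is a hedged sketch on synthetic "confident voxel" data; the Fisher-criterion part and the surrounding editing pipeline are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X_conf = rng.normal(size=(300, 50))   # confident voxels' candidate features
# synthetic ground truth: only features 3 and 7 are informative
y_conf = (X_conf[:, 3] + 0.5 * X_conf[:, 7] > 0).astype(float)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_conf, y_conf)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("selected feature indices:", top)   # indices 3 and 7 should rank high
```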


computer vision and pattern recognition | 2014

Joint Coupled-Feature Representation and Coupled Boosting for AD Diagnosis

Yinghuan Shi; Heung Il Suk; Yang Gao; Dinggang Shen

Recently, there has been great interest in computer-aided Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) diagnosis. Previous learning-based methods framed diagnosis as a classification task and directly used the low-level features extracted from neuroimaging data without considering the relations among them. However, from a neuroscience point of view, it is well known that the human brain is a complex system in which multiple brain regions are anatomically connected and functionally interact with each other. It is therefore natural to hypothesize that the low-level features extracted from neuroimaging data are related to each other in some way. To this end, in this paper, we first devise a coupled feature representation by utilizing intra-coupled and inter-coupled interaction relationships. For multi-modal data fusion, we propose a novel coupled boosting algorithm that analyzes the pairwise coupled-diversity correlation between modalities. Specifically, we formulate a new weight updating function that considers both incorrectly and inconsistently classified samples. In our experiments on the ADNI dataset, the proposed method achieved the best performance, with accuracies of 94.7% and 80.1% for AD vs. Normal Control (NC) and MCI vs. NC classification, respectively, outperforming both the competing methods and the state-of-the-art methods.
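The weight-updating idea can be made concrete with a schematic AdaBoost-style rule in which misclassified samples and samples on which the two modality learners disagree both gain weight. The paper's actual update function differs; alpha and beta below are illustrative knobs, not parameters from the paper.

```python
import numpy as np

def coupled_weight_update(w, y, pred_a, pred_b, alpha=0.5, beta=0.3):
    """Schematic coupled-boosting update: upweight samples that are
    incorrectly classified and samples on which the modalities disagree."""
    wrong = (pred_a != y) | (pred_b != y)    # incorrectly classified
    inconsistent = pred_a != pred_b          # modalities disagree
    w = w * np.exp(alpha * wrong + beta * inconsistent)
    return w / w.sum()                       # renormalize the distribution

y      = np.array([ 1, -1,  1, -1])
pred_a = np.array([ 1, -1, -1, -1])          # modality-A predictions
pred_b = np.array([ 1,  1, -1, -1])          # modality-B predictions
w = np.full(4, 0.25)
print(coupled_weight_update(w, y, pred_a, pred_b).round(3))
```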


IEEE Transactions on Biomedical Engineering | 2013

Multimodal Sparse Representation-Based Classification for Lung Needle Biopsy Images

Yinghuan Shi; Yang Gao; Yubin Yang; Ying Zhang; Dong Wang

Lung needle biopsy image classification is a critical task for computer-aided lung cancer diagnosis. In this study, a novel method, multimodal sparse representation-based classification (mSRC), is proposed for classifying lung needle biopsy images. In the data acquisition procedure, cell nuclei are automatically segmented from images captured from needle biopsy specimens. Features of three modalities (shape, color, and texture) are then extracted from the segmented cell nuclei. After this procedure, mSRC goes through a training phase and a testing phase. In the training phase, three discriminative subdictionaries corresponding to the shape, color, and texture information are jointly learned by a genetic-algorithm-guided multimodal dictionary learning approach. The dictionary learning aims to select the most discriminative samples and to encourage large disagreement among the subdictionaries. In the testing phase, when a new image arrives, a hierarchical fusion strategy is applied: it first predicts the labels of the cell nuclei by fusing the three modalities, and then predicts the label of the image by majority voting. Our method is evaluated on a real image set of 4372 cell nuclei regions segmented from 271 images. These cell nuclei regions fall into five classes: four cancerous classes (corresponding to four types of lung cancer) plus one normal class (no cancer). The results demonstrate that multimodal information is important for lung needle biopsy image classification. Moreover, compared to several state-of-the-art methods (LapRLS, MCMI-AB, mcSVM, ESRC, KSRC), the proposed mSRC achieves significant improvements (mean accuracy of 88.1%, precision of 85.2%, recall of 92.8%, etc.), especially for distinguishing different cancerous types.
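The testing-phase decision rule follows the usual sparse representation-based classification (SRC) pattern, which can be sketched for a single modality: sparse-code a nucleus over a class-structured dictionary, pick the class with the smallest reconstruction residual, then majority-vote the nuclei of an image. The learned subdictionaries and the multimodal fusion are replaced here by a synthetic random dictionary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_classes, per_class, dim = 5, 20, 30
D = rng.normal(size=(dim, n_classes * per_class))     # columns grouped by class
col_class = np.repeat(np.arange(n_classes), per_class)

def src_label(x):
    """Classify one nucleus by the smallest class-wise reconstruction residual."""
    code = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000).fit(D, x).coef_
    residuals = [np.linalg.norm(x - D[:, col_class == c] @ code[col_class == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

# seven synthetic nuclei built from class-2 atoms, then image-level voting
nuclei = [D[:, col_class == 2] @ rng.random(per_class) for _ in range(7)]
votes = [src_label(x / np.linalg.norm(x)) for x in nuclei]
print(votes, "-> image label:", max(set(votes), key=votes.count))
```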


Intelligent Data Engineering and Automated Learning | 2011

P2LSA and P2LSA+: Two Paralleled Probabilistic Latent Semantic Analysis Algorithms Based on the MapReduce Model

Yan Jin; Yang Gao; Yinghuan Shi; Lin Shang; Ruili Wang; Yubin Yang

Two novel paralleled Probabilistic Latent Semantic Analysis (PLSA) algorithms based on the MapReduce model, named P2LSA and P2LSA+, are proposed. When dealing with large-scale data sets, P2LSA and P2LSA+ improve computing speed on the Hadoop platform. The Expectation-Maximization (EM) algorithm is often used in traditional PLSA to estimate two hidden parameter vectors, and parallel PLSA implements the EM algorithm in parallel. The EM algorithm includes two steps: the E-step and the M-step. In P2LSA, the Map function performs the E-step and the Reduce function performs the M-step. However, all the intermediate results computed in the E-step must be sent to the M-step, and transferring this large amount of data increases the burden on the network and the overall running time. In contrast, the Map function in P2LSA+ performs the E-step and the M-step simultaneously, so less data is transferred between the two steps and performance improves. Experiments are conducted to evaluate the performance of P2LSA and P2LSA+ on a data set of 20000 users and 10927 goods. The speedup curves show that the overall running time decreases as the number of computing nodes increases, and the results demonstrate that P2LSA+ is about three times faster than P2LSA.
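The division of labor can be mimicked on a single machine: the E-step runs as a "map" over documents that emits expected counts, and the M-step is the "reduce" that aggregates them. This toy NumPy version only illustrates the PLSA EM updates and the map/reduce split, not the Hadoop implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_docs, n_words, n_topics = 8, 12, 3
N = rng.integers(0, 5, size=(n_docs, n_words))          # term counts n(d, w)
Pwz = rng.random((n_words, n_topics)); Pwz /= Pwz.sum(0)   # P(w|z)
Pzd = rng.random((n_topics, n_docs)); Pzd /= Pzd.sum(0)    # P(z|d)

for _ in range(20):
    # "map": per-document E-step emits expected counts n(d,w) * P(z|d,w)
    def e_step(d):
        post = Pwz * Pzd[:, d]                          # (words, topics)
        post /= post.sum(1, keepdims=True) + 1e-12
        return N[d][:, None] * post
    emitted = [e_step(d) for d in range(n_docs)]
    # "reduce": aggregate the emitted statistics into the M-step updates
    Pwz = sum(emitted)
    Pwz /= Pwz.sum(0, keepdims=True) + 1e-12
    Pzd = np.stack([e.sum(0) for e in emitted], axis=1)
    Pzd /= Pzd.sum(0, keepdims=True) + 1e-12

print("doc 0 topic mixture P(z|d=0):", Pzd[:, 0].round(3))
```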

Collaboration

Top co-authors of Yinghuan Shi:

Dinggang Shen (University of North Carolina at Chapel Hill)
Yaozong Gao (University of North Carolina at Chapel Hill)
Lei Wang (Information Technology University)
Hujun Yin (University of Manchester)
Shu Liao (University of North Carolina at Chapel Hill)
Luping Zhou (Information Technology University)