Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jiawen Yao is active.

Publication


Featured research published by Jiawen Yao.


IEEE Transactions on Image Processing | 2015

Background Subtraction Based on Low-Rank and Structured Sparse Decomposition

Xin Liu; Guoying Zhao; Jiawen Yao; Chun Qi

Low-rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wise sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wise sparse but structurally sparse. Meanwhile, a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as the sum of two terms: a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, by virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging, well-known datasets demonstrate that the proposed approach outperforms state-of-the-art methods and works effectively on a wide range of complex videos.
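As an illustrative sketch (not the authors' exact optimization), the decomposition idea can be approximated by alternating two proximal steps: singular value thresholding for the low-rank background and group-wise soft-thresholding for the structured sparse foreground. The group partition and all parameter values here are hypothetical:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def group_soft_threshold(M, lam, groups):
    """Group-wise shrinkage: each group of pixel rows is kept or shrunk as a
    whole, per frame (column), encouraging structured rather than pixel-wise
    sparsity."""
    S = np.zeros_like(M)
    for j in range(M.shape[1]):
        for g in groups:
            norm = np.linalg.norm(M[g, j])
            if norm > lam:
                S[g, j] = (1.0 - lam / norm) * M[g, j]
    return S

def decompose(D, tau, lam, groups, n_iter=50):
    """Alternate the two proximal steps so D splits into a low-rank
    background L and a structured sparse foreground S."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, tau)
        S = group_soft_threshold(D - L, lam, groups)
    return L, S
```

The per-group test-and-shrink step is what distinguishes structured sparsity from the plain entry-wise soft-thresholding used in classical robust PCA.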


Medical Image Computing and Computer Assisted Intervention | 2015

Accelerated Dynamic MRI Reconstruction with Total Variation and Nuclear Norm Regularization

Jiawen Yao; Zheng Xu; Xiaolei Huang; Junzhou Huang

In this paper, we propose a novel compressive sensing model for dynamic MR reconstruction. With total variation (TV) and nuclear norm (NN) regularization, our method can utilize both spatial and temporal redundancy in dynamic MR images. Due to the non-smoothness and non-separability of TV and NN terms, it is difficult to optimize the primal problem. To address this issue, we propose a fast algorithm by solving a primal-dual form of the original problem. The ergodic convergence rate of the proposed method is \(\mathcal{O}(1/N)\) for N iterations. In comparison with six state-of-the-art methods, extensive experiments on single-coil and multi-coil dynamic MR data demonstrate the superior performance of the proposed method in terms of both reconstruction accuracy and time complexity.
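For intuition, the two regularizers have simple building blocks: the proximal operator of the nuclear norm soft-thresholds the singular values, and temporal total variation sums absolute differences between consecutive frames. This is a sketch of the ingredients only, not the paper's primal-dual solver:

```python
import numpy as np

def nuclear_norm_prox(X, tau):
    """Proximal operator of tau * ||X||_*: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def temporal_tv(X):
    """Anisotropic total variation along the time axis of a
    (pixels x frames) matrix: sum of |x[:, t+1] - x[:, t]|."""
    return np.abs(np.diff(X, axis=1)).sum()
```

The nuclear norm term captures temporal redundancy (frames are nearly linearly dependent), while the TV term penalizes abrupt intensity changes between adjacent frames.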


Medical Image Computing and Computer Assisted Intervention | 2016

Imaging Biomarker Discovery for Lung Cancer Survival Prediction

Jiawen Yao; Sheng Wang; Xinliang Zhu; Junzhou Huang

Solid tumors are heterogeneous tissues composed of a mixture of cells with distinctive tissue architectures. However, cellular heterogeneity, that is, the differences among cell types, is generally not reflected in molecular profiles or in recent histopathological image-based analyses of lung cancer, leaving such information underused. This paper presents a computational approach that quantitatively describes cellular heterogeneity across cell types in H&E-stained pathological images. In our work, a deep learning approach was first used for cell subtype classification. We then introduced a set of quantitative features to describe cellular information. Several feature selection methods were used to discover significant imaging biomarkers for survival prediction. These discovered imaging biomarkers are consistent with pathological and biological evidence. Experimental results on two lung cancer datasets demonstrated that survival models built from the clinical imaging biomarkers have better predictive power than state-of-the-art methods using molecular profiling data and traditional imaging biomarkers.
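Survival prediction power in this setting is typically scored with the concordance index; the abstract does not name the metric, so treating it as the evaluation criterion is an assumption. A minimal pure-Python version:

```python
def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose predicted risks are ordered
    correctly. times: observed times; events: 1 if death observed, 0 if
    censored; risk_scores: higher score = higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i's death precedes j's observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking.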


Medical Image Computing and Computer Assisted Intervention | 2016

Subtype Cell Detection with an Accelerated Deep Convolution Neural Network

Sheng Wang; Jiawen Yao; Zheng Xu; Junzhou Huang

Robust cell detection in histopathological images is a crucial step in computer-assisted diagnosis methods. In addition, recent studies show that subtypes play a significant role in better characterizing tumor growth and predicting outcomes. In this paper, we propose a novel subtype cell detection method with an accelerated deep convolutional neural network. The proposed method not only detects cells but also classifies the subtype of each detected cell. Based on the subtype cell detection results, we extract subtype-cell-related features and use them in survival prediction. We demonstrate that our proposed method achieves excellent subtype cell detection performance and that our proposed subtype cell features lead to more accurate survival prediction.


International Symposium on Biomedical Imaging | 2016

Lung cancer survival prediction from pathological images and genetic data — An integration study

Xinliang Zhu; Jiawen Yao; Xin Luo; Guanghua Xiao; Yang Xie; Adi F. Gazdar; Junzhou Huang

In this paper, we propose a framework for lung cancer survival prediction by integrating genetic data and pathological images. Since molecular profiles and pathological images reveal complementary information about tumor characteristics, their integration benefits survival analysis. The gene expression signatures are processed using the Model-Based Background Correction method. A robust cell detection and segmentation method is applied to segment each individual cell in the pathological images and extract image features. Based on the cell detection results, an extensive set of features is extracted using efficient geometry and texture descriptors. A supervised principal component regression model is fitted to evaluate the proposed framework. Experimental results demonstrate the strong predictive power of the statistical model built from the integration of genetic data and pathological images compared with using either type of data alone.
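A minimal sketch of supervised principal component regression, assuming the common screen-then-PCA-then-regress formulation; the exact variant used in the paper may differ:

```python
import numpy as np

def supervised_pcr(X, y, n_keep, n_components):
    """Supervised principal component regression (sketch):
    1) screen features by absolute association with the outcome,
    2) PCA on the retained features, 3) least-squares on top components."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc)                     # univariate screening scores
    keep = np.argsort(corr)[-n_keep:]            # retain the strongest features
    U, s, Vt = np.linalg.svd(Xc[:, keep], full_matrices=False)
    Z = U[:, :n_components] * s[:n_components]   # principal component scores
    coef, *_ = np.linalg.lstsq(Z, yc, rcond=None)
    return Z @ coef + y.mean()                   # fitted values
```

The screening step is what makes the procedure "supervised": components are built only from features already associated with the outcome, unlike ordinary PCR.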


International Conference on Machine Learning | 2015

Computer-Assisted Diagnosis of Lung Cancer Using Quantitative Topology Features

Jiawen Yao; Dheeraj Ganti; Xin Luo; Guanghua Xiao; Yang Xie; Shirley X. Yan; Junzhou Huang

In this paper, we propose a computer-aided diagnosis and analysis framework for a challenging and important clinical problem in lung cancer: differentiating the two subtypes of non-small cell lung cancer (NSCLC). The proposed framework utilizes both local and topological features from histopathology images. To extract local features, a robust cell detection and segmentation method is first adopted to segment each individual cell in the images. A set of extensive local features is then extracted using efficient geometry and texture descriptors based on the cell detection results. To investigate the effectiveness of topological features, we calculate architectural properties from labeled nuclei centroids. Experimental results from four popular classifiers suggest that cellular structure is very important and that the topological descriptors are representative markers for distinguishing between the two subtypes of NSCLC.
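As one hypothetical example of an architectural property computed from labeled nuclei centroids (the abstract does not list the specific descriptors), nearest-neighbor distance statistics capture how regularly nuclei are spaced:

```python
import numpy as np

def nn_distance_stats(centroids):
    """Simple architectural descriptors from nuclei centroids:
    mean and standard deviation of each nucleus's nearest-neighbor distance."""
    pts = np.asarray(centroids, dtype=float)
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    nn = d.min(axis=1)            # nearest-neighbor distance per nucleus
    return nn.mean(), nn.std()
```

A low standard deviation indicates a regular, lattice-like arrangement, while a high value reflects clustered or disordered nuclei.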


International Conference on Multimedia and Expo | 2014

Foreground detection using low rank and structured sparsity

Jiawen Yao; Xin Liu; Chun Qi

In this paper, a novel foreground detection method based on a two-stage framework is presented. In the first stage, a class of structured sparsity-inducing norms is introduced to model moving objects in videos, so that the observed sequence is regarded as the sum of a low-rank matrix and a structured sparse outlier matrix. In the second stage, by virtue of adaptive parameters, the proposed method uses a motion saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging datasets demonstrate that the proposed approach outperforms state-of-the-art methods and works effectively on a wide range of complex videos.


Medical Image Computing and Computer Assisted Intervention | 2017

Deep Correlational Learning for Survival Prediction from Multi-modality Data

Jiawen Yao; Xinliang Zhu; Feiyun Zhu; Junzhou Huang

Technological advances have created a great opportunity to provide multi-view data for patients. However, due to the large discrepancy between heterogeneous views, traditional survival models are unable to efficiently handle multi-modality data or to learn the very complex interactions that can affect survival outcomes in various ways. In this paper, we develop a Deep Correlational Survival Model (DeepCorrSurv) for the integration of multi-view data. The proposed network consists of two sub-networks: a view-specific sub-network and a common sub-network. To remove the view discrepancy, the proposed DeepCorrSurv first explicitly maximizes the correlation among the views. It then transfers feature hierarchies from the view commonality and fine-tunes on the survival regression task. Extensive experiments on real lung and brain tumor datasets demonstrate the effectiveness of the proposed DeepCorrSurv model using multi-modality data across different tumor types.
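The survival regression sub-task in deep models of this kind is commonly trained with the negative Cox partial log-likelihood. The abstract does not specify the loss, so this is an assumption; the sketch below also ignores tied event times:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, times, events):
    """Negative Cox partial log-likelihood for predicted risk scores.
    risk[i]: model output for patient i; events[i] = 1 if failure observed,
    0 if censored."""
    order = np.argsort(-times)                  # sort by descending time
    risk, events = risk[order], events[order]
    # running log-sum-exp gives the log of each patient's risk-set sum
    log_cumsum = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_cumsum)[events == 1])
```

Because the loss only involves the ordering of event times and the risk-set sums, it can serve as the final layer's objective after the correlation-maximization stage.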


Computer Vision and Pattern Recognition | 2017

WSISA: Making Survival Prediction from Whole Slide Histopathological Images

Xinliang Zhu; Jiawen Yao; Feiyun Zhu; Junzhou Huang

Image-based precision medicine techniques can be used to better treat cancer patients. However, the gigapixel resolution of Whole Slide Histopathological Images (WSIs) makes traditional survival models computationally infeasible. These models usually adopt manually labeled discriminative patches from regions of interest (ROIs) and are unable to directly learn discriminative patches from WSIs. We argue that a small set of patches cannot fully represent a patient's survival status due to tumor heterogeneity. Another challenge is that survival prediction usually comes with insufficient training patient samples. In this paper, we propose an effective Whole Slide Histopathological Images Survival Analysis framework (WSISA) to overcome the above challenges. To exploit survival-discriminative patterns from WSIs, we first extract hundreds of patches from each WSI by adaptive sampling and then group these patches into different clusters. We then train an aggregation model to make patient-level predictions based on cluster-level Deep Convolutional Survival (DeepConvSurv) prediction results. Unlike existing state-of-the-art image-based survival models, which extract features from a few patches in small regions of WSIs, the proposed framework can efficiently exploit all discriminative patterns in WSIs to predict patients' survival status. To the best of our knowledge, this has not been shown before. We apply our method to survival prediction of glioma and non-small cell lung cancer using three datasets. Results demonstrate that the proposed framework significantly improves prediction performance compared with existing state-of-the-art survival methods.
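The patch-grouping step can be illustrated with a minimal k-means over patch feature vectors. The deterministic farthest-point initialization and toy feature dimensionality below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def kmeans(features, k, n_iter=20):
    """Minimal k-means to group patch feature vectors into clusters,
    standing in for WSISA's patch-clustering stage."""
    X = np.asarray(features, dtype=float)
    # deterministic farthest-point initialization
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(n_iter):
        # assign each patch to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```

In the full framework, a separate survival model is fitted per cluster and an aggregation model combines the cluster-level predictions into a patient-level one.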


Bioinformatics and Biomedicine | 2016

Imaging-genetic data mapping for clinical outcome prediction via supervised conditional Gaussian graphical model

Xinliang Zhu; Jiawen Yao; Guanghua Xiao; Yang Xie; Jaime Rodriguez-Canales; Edwin R. Parra; Carmen Behrens; Ignacio I. Wistuba; Junzhou Huang

Imaging-genetic data mapping is important for clinical outcome prediction such as survival analysis. In this paper, we propose a supervised conditional Gaussian graphical model (SuperCGGM) to uncover survival-associated mappings between pathological images and genetic data. The proposed method integrates heterogeneous modal data into the survival model by weighted projection within the data. To obtain a sparse solution, we apply ℓ1 regularization to the partial log-likelihood loss function and propose a cyclic coordinate ascent algorithm to solve it. The method also bridges the gap between supervised models and the conditional Gaussian graphical model (CGGM). Compared to nine state-of-the-art methods, such as SuperPCA and CGGM, our method is superior due to its ability to integrate diverse information from heterogeneous modal data in a supervised way. Extensive experiments also show the strong power of SuperCGGM in mapping survival-associated image and gene expression signatures.
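The cyclic coordinate ascent idea can be illustrated on the simpler ℓ1-penalized least-squares problem (the lasso), where each coordinate update is a closed-form soft-thresholding step; the paper applies the same cyclic pattern to the ℓ1-penalized partial log-likelihood:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form solution of a one-dimensional l1-penalized subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return b
```

Cycling through one coordinate at a time keeps every update cheap and drives small coefficients exactly to zero, which is what produces the sparse solution.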

Collaboration


Dive into Jiawen Yao's collaborations.

Top Co-Authors

Junzhou Huang, University of Texas at Arlington
Xinliang Zhu, University of Texas at Arlington
Feiyun Zhu, University of Texas at Arlington
Sheng Wang, University of Texas at Arlington
Zheng Xu, University of Texas at Arlington
Guanghua Xiao, University of Texas Southwestern Medical Center
Yang Xie, University of Texas Southwestern Medical Center
Chun Qi, Xi'an Jiaotong University