Medical Image Analysis | 2021

Deep cross-view co-regularized representation learning for glioma subtype identification


Abstract


The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valuable guidance for both risk-benefit assessment and clinical decision making. Feature representations of gliomas in magnetic resonance imaging (MRI) have been widely used to reveal the underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative MRI feature representations for gliomas remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations from cross-view images generated from MRI, using a bi-directional mapping that connects the original imaging space and the latent space; a view-correlated regularizer and an output-consistent regularizer in the latent space are employed to exploit view correlation and enforce view consistency, respectively. We further learn view-sharable representations that capture the complementary information of multiple views by projecting the view-specific representations into a holistically shared space and enhancing them via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are combined to identify the glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.
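To make the ingredients named in the abstract concrete, the sketch below shows one minimal way the loss terms could be wired together in PyTorch: per-view encoder/decoder branches (bi-directional mapping), a view-correlated regularizer, an output-consistent regularizer, a shared projection for view-sharable representations, and an adversarial view discriminator. All module names, network sizes, and the weight lam are illustrative assumptions, not the authors' implementation; the paper works on MRI-derived cross-view images with deep networks rather than the toy linear layers used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewBranch(nn.Module):
    """One view's encoder/decoder: a bi-directional mapping between the imaging
    space and a latent view-specific representation (dimensions are assumptions)."""

    def __init__(self, in_dim=256, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)         # latent view-specific representation
        return z, self.decoder(z)   # reconstruction back to the imaging space


class CrossViewModel(nn.Module):
    def __init__(self, n_views=2, in_dim=256, latent_dim=64, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([ViewBranch(in_dim, latent_dim) for _ in range(n_views)])
        self.shared_proj = nn.Linear(latent_dim, latent_dim)       # holistically shared space
        self.view_discriminator = nn.Linear(latent_dim, n_views)   # adversary over view identity
        self.per_view_clf = nn.ModuleList([nn.Linear(latent_dim, n_classes) for _ in range(n_views)])
        self.fusion_clf = nn.Linear(latent_dim * (n_views + 1), n_classes)

    def forward(self, views):
        zs, recs = zip(*(branch(v) for branch, v in zip(self.branches, views)))
        shared = [self.shared_proj(z) for z in zs]                 # view-sharable representations
        fused = torch.cat(list(zs) + [torch.stack(shared).mean(0)], dim=1)
        return zs, recs, shared, self.fusion_clf(fused)


def losses(model, views, labels, lam=0.1):
    zs, recs, shared, logits = model(views)
    task = F.cross_entropy(logits, labels)                         # subtype identification
    recon = sum(F.mse_loss(r, v) for r, v in zip(recs, views))     # bi-directional mapping
    corr = sum(F.mse_loss(zs[i], zs[j])                            # view-correlated regularizer
               for i in range(len(zs)) for j in range(i + 1, len(zs)))
    consist = sum(F.cross_entropy(clf(z), labels)                  # output-consistent regularizer
                  for clf, z in zip(model.per_view_clf, zs))
    # Adversarial term: the discriminator tries to recover which view a shared
    # feature came from; in practice it is trained in alternation with (or via
    # gradient reversal against) the encoders, which is omitted here for brevity.
    adv = sum(F.cross_entropy(model.view_discriminator(s),
                              torch.full((s.size(0),), i, dtype=torch.long, device=s.device))
              for i, s in enumerate(shared))
    return task + lam * (recon + corr + consist), adv
```

In training, the first returned loss would be minimized over the branches, classifiers, and shared projection, while the adversarial term would be minimized by the discriminator and maximized (e.g., via gradient reversal) by the shared projection, so that the view-sharable representations become indistinguishable across views.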

Volume 73
Pages 102160
DOI 10.1016/j.media.2021.102160
Language English
Journal Medical Image Analysis
