Physics in Medicine and Biology | 2021

Hierarchical-order multimodal interaction fusion network for grading gliomas

Abstract


Gliomas are the most common type of primary brain tumor and occur at different grades. Accurate grading of a glioma from multimodal magnetic resonance imaging (MRI) is therefore important for clinical treatment planning and prognostic assessment. In this study, we developed a noninvasive deep-learning method for grading gliomas from multimodal MRI that focuses on effective multimodal fusion by leveraging collaborative and diverse high-order statistical information. Specifically, a novel high-order multimodal interaction module was designed to promote interactive learning of multimodal knowledge for more efficient fusion. For more powerful feature expression and feature-correlation learning, a high-order attention mechanism was embedded in the interaction module to model complex, high-order statistical information and enhance the classification capability of the network. Moreover, increasing orders were applied at different levels to hierarchically recalibrate each modality stream through diverse-order attention statistics, thus encouraging all-sided attention knowledge with fewer parameters. To evaluate the effectiveness of the proposed scheme, extensive experiments were conducted on The Cancer Imaging Archive (TCIA) and Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) datasets with five-fold cross-validation. The proposed method achieved high prediction performance, with area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity values of 95.2%, 94.28%, 95.24%, and 92.00% on the BraTS2017 dataset and 93.50%, 92.86%, 97.14%, and 90.48% on the TCIA dataset, respectively.
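
The abstract does not give implementation details, so the following is only a minimal illustrative sketch, not the authors' code: it shows, under assumed PyTorch conventions, how a high-order (order-k) attention recalibration and a two-modality interaction fusion block of the kind described above could be structured. All class names, tensor shapes, the choice of global average pooling, and the residual feedback into each modality stream are assumptions.

import torch
import torch.nn as nn


class HighOrderAttention(nn.Module):
    """Channel attention built from order-k statistics of pooled features.

    The pooled descriptor is raised element-wise to powers 1..k and the
    concatenated statistics are mapped to channel weights (illustrative
    only; the paper's exact formulation may differ).
    """

    def __init__(self, channels, order=2, reduction=4):
        super().__init__()
        self.order = order
        self.fc = nn.Sequential(
            nn.Linear(channels * order, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))                  # global average pooling -> (B, C)
        stats = torch.cat([s ** (k + 1) for k in range(self.order)], dim=1)
        w = self.fc(stats).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # recalibrated feature map


class MultimodalInteractionFusion(nn.Module):
    """Fuses two MRI modality streams and recalibrates the joint features
    with order-k attention before feeding them back into each stream."""

    def __init__(self, channels, order=2):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.attn = HighOrderAttention(channels, order=order)

    def forward(self, xa, xb):                  # two modality feature maps, same shape
        joint = self.mix(torch.cat([xa, xb], dim=1))
        joint = self.attn(joint)
        # residual feedback of interaction knowledge into both streams
        return xa + joint, xb + joint


if __name__ == "__main__":
    fa = torch.randn(2, 64, 32, 32)             # e.g. T1 stream features (assumed shape)
    fb = torch.randn(2, 64, 32, 32)             # e.g. FLAIR stream features
    # the abstract describes increasing orders at successive levels;
    # here a single level with order=2 is shown
    block = MultimodalInteractionFusion(64, order=2)
    ya, yb = block(fa, fb)
    print(ya.shape, yb.shape)

In a hierarchical setting, as the abstract suggests, one such block per network level could be instantiated with a growing `order` argument so that deeper levels capture higher-order attention statistics while shallow levels stay lightweight.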

DOI 10.1088/1361-6560/ac30a1
Language English
Journal Physics in Medicine and Biology
