bioRxiv | 2019

Integrating deep and radiomics features in cancer bioimaging


Abstract


Almost every clinical specialty is expected to use artificial intelligence in the future. The first area of practical impact is expected to be the rapid and accurate interpretation of image streams such as radiology scans, histopathology slides, ophthalmic imaging, and other bioimaging diagnostic systems, enriched by clinical phenotypes used as outcome labels or additional descriptors. In this study, we introduce a machine learning framework for automatic image interpretation that combines the current pattern recognition approach (“radiomics”) with Deep Learning (DL). As a first application in cancer bioimaging, we apply the framework to the prognosis of locoregional recurrence in head and neck squamous cell carcinoma (N=298) from Computed Tomography (CT) and Positron Emission Tomography (PET) imaging. The DL architecture is composed of two parallel cascades of Convolutional Neural Network (CNN) layers merging in a softmax classification layer. The network is first pretrained on head and neck tumor stage diagnosis, then fine-tuned on the prognostic task by internal transfer learning. In parallel, radiomics features (e.g., shape of the tumor mass, texture, and pixel intensity statistics) are derived by predefined feature extractors on the CT/PET pairs. We compare and mix deep learning and radiomics features in a unifying classification pipeline (RADLER), where model selection and evaluation are based on a data analysis plan developed in the MAQC initiative for reproducible biomarkers. On the multimodal CT/PET cancer dataset, the mixed deep learning/radiomics approach is more accurate than using only one feature type or image modality. Further, RADLER significantly improves on published results for the same data.
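The fusion step described above, in which deep features from the two CNN branches are mixed with handcrafted radiomics features before a softmax classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, weights, and function names (`fuse_and_classify`, `softmax`) are all hypothetical.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_and_classify(deep_ct, deep_pet, radiomics, W, b):
    """Concatenate deep CT/PET features with radiomics features
    and classify with a single softmax layer (2 classes:
    locoregional recurrence vs. no recurrence)."""
    x = np.concatenate([deep_ct, deep_pet, radiomics], axis=-1)
    return softmax(x @ W + b)

rng = np.random.default_rng(0)
deep_ct   = rng.normal(size=(4, 64))  # CNN features from the CT branch (placeholder)
deep_pet  = rng.normal(size=(4, 64))  # CNN features from the PET branch (placeholder)
radiomics = rng.normal(size=(4, 32))  # shape / texture / intensity statistics (placeholder)
W = rng.normal(size=(160, 2)) * 0.01  # 64 + 64 + 32 fused features -> 2 classes
b = np.zeros(2)

probs = fuse_and_classify(deep_ct, deep_pet, radiomics, W, b)
print(probs.shape)        # (4, 2): one probability pair per patient
print(probs.sum(axis=1))  # each row sums to 1
```

In practice the classifier weights would be trained (and, per the paper's data analysis plan, selected and evaluated under the MAQC protocol); here they are random placeholders to keep the sketch self-contained.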

DOI 10.1101/568170
Language English
Journal bioRxiv
