International Journal of Radiation Oncology, Biology, Physics | 2021

Enabling Few-View 3D Tomographic Image Reconstruction by Geometry-Informed Deep Learning.


Abstract


PURPOSE/OBJECTIVE(S)
Deep learning affords enormous opportunities to augment the armamentarium of biomedical imaging, although its design and implementation have potential pitfalls. Fundamentally, most deep learning models are driven entirely by data without consideration of any physical priors, which dramatically increases the complexity of the neural networks and limits the application scope and generalizability of the resultant models. Here we establish a geometry-informed deep learning framework for ultra-sparse tomographic image reconstruction. We introduce a novel mechanism for integrating the geometric priors of the imaging system and demonstrate that the seamless inclusion of known geometric priors is essential to enhance the performance of volumetric computed tomography imaging with ultra-sparse sampling.

MATERIALS/METHODS
We propose a geometry-informed deep learning framework for 3D tomographic image reconstruction and illustrate how the known imaging geometry is integrated into dual-domain deep learning. Specifically, the proposed framework consists of three modules: a) a 2D projection generation network that learns to generate novel-view projections from the given sparse views; b) a geometric back-projection operator that transforms the 2D projections into 3D images, referred to as geometry-preserving images (GPIs), by geometrically relating the pixelated 2D input data to the corresponding ray lines in 3D space; and c) a 3D image refinement network that learns to refine the GPIs into the final 3D images. To evaluate the feasibility of the proposed approach, we conduct experiments on a dataset of 1018 lung CT images, with 80% of the data used for training and 20% for testing. The projection images are digitally produced from the CT images using a geometry consistent with a clinical on-board cone-beam CT system for radiation therapy.

RESULTS
We deploy the trained model on the held-out testing set for few-view 3D image reconstruction and compare the reconstructed results with the ground truth qualitatively and quantitatively. For single-, two-, and three-view reconstruction, the average NRMSE/SSIM/PSNR values over all testing data are 0.368/0.734/20.770, 0.300/0.807/22.687, and 0.274/0.838/23.669, respectively. Visualizing the reconstructed CT images, we observe that the proposed model generates images closely matching the targets even though the anatomic structures vary considerably across patients, indicating the potential of the proposed model for volumetric imaging with only a few views. Moreover, experiments show that the proposed model also generalizes to multi-view 3D image reconstruction and outperforms deep models without geometry priors.

CONCLUSION
We present a novel geometry-integrated deep learning model for volumetric imaging with ultra-sparse sampling. The study opens new avenues for data-driven biomedical imaging and promises substantially improved imaging tools for clinical imaging and image-guided interventions.
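The geometric back-projection operator in module b) is the geometry prior at the heart of the framework: each detector pixel is smeared back along its ray line into the volume to form a geometry-preserving image. The abstract gives no implementation, so the following is a minimal voxel-driven sketch under an assumed flat-panel cone-beam geometry; the function name, the distances, and the nearest-neighbour lookup are illustrative assumptions, not the authors' code.

```python
# Minimal voxel-driven cone-beam back-projection sketch (illustrative only;
# geometry, names, and parameters are assumptions, not the published method).
import numpy as np

def backproject_view(proj, vol_shape, sad=1000.0, sdd=1500.0,
                     voxel_mm=1.0, pixel_mm=1.0, angle_rad=0.0):
    """Smear one 2D projection back along its ray lines into a 3D volume.

    proj      : (rows, cols) projection, detector centered on the central ray
    vol_shape : (nz, ny, nx) output volume shape
    sad, sdd  : source-to-axis and source-to-detector distances (mm)
    """
    nz, ny, nx = vol_shape
    rows, cols = proj.shape
    vol = np.zeros(vol_shape, dtype=np.float32)

    # Voxel-center coordinates (mm), volume centered on the rotation axis
    z = (np.arange(nz) - nz / 2 + 0.5) * voxel_mm
    y = (np.arange(ny) - ny / 2 + 0.5) * voxel_mm
    x = (np.arange(nx) - nx / 2 + 0.5) * voxel_mm
    zz, yy, xx = np.meshgrid(z, y, x, indexing="ij")

    # Rotate the volume into the source frame for this gantry angle
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    xr = c * xx + s * yy           # axis from isocenter toward the detector
    yr = -s * xx + c * yy

    # Perspective divide: map each voxel onto the flat detector plane
    mag = sdd / (sad + xr)         # per-voxel magnification
    u = yr * mag / pixel_mm + cols / 2 - 0.5   # detector column (continuous)
    v = zz * mag / pixel_mm + rows / 2 - 0.5   # detector row (continuous)

    # Nearest-neighbour lookup; voxels projecting off the detector stay zero
    ui, vi = np.rint(u).astype(int), np.rint(v).astype(int)
    inside = (ui >= 0) & (ui < cols) & (vi >= 0) & (vi < rows)
    vol[inside] = proj[vi[inside], ui[inside]]
    return vol
```

Summing the outputs of this operator over the available (given plus generated) views would yield a crude multi-view GPI of the kind the 3D refinement network in module c) is trained to clean up; the operator itself is fixed by the scanner geometry and contains no learned parameters.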
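The reported NRMSE, SSIM, and PSNR figures can be reproduced for any reconstruction/ground-truth pair; the abstract does not specify the evaluation code, so this sketch simply assumes the standard scikit-image implementations of the three metrics.

```python
# Hedged sketch of the reported image-quality metrics (NRMSE / SSIM / PSNR);
# one plausible evaluation, not necessarily the authors' exact procedure.
import numpy as np
from skimage.metrics import (normalized_root_mse,
                             peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(recon, truth):
    """Return (NRMSE, SSIM, PSNR) for a reconstructed vs. ground-truth volume."""
    data_range = truth.max() - truth.min()
    return (normalized_root_mse(truth, recon),
            structural_similarity(truth, recon, data_range=data_range),
            peak_signal_noise_ratio(truth, recon, data_range=data_range))
```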

Volume 111, Issue 3S
Pages e118
DOI 10.1016/j.ijrobp.2021.07.533
Language English
Journal International Journal of Radiation Oncology, Biology, Physics
