International journal of radiation oncology, biology, physics | 2021

Novel-View X-Ray Projection Synthesis Through Geometry-Integrated Deep Learning.


Abstract


PURPOSE/OBJECTIVE(S)
X-ray imaging is a widely used approach to view the internal structure of a subject for clinical diagnosis, image-guided interventions, and decision-making. X-ray projections acquired at different view angles provide complementary information about the patient's anatomy and are required for stereoscopic or volumetric imaging of the subject. In practice, however, obtaining multiple-view projections inevitably increases radiation dose and complicates the clinical workflow. Here we investigate a strategy for obtaining the X-ray projection image at a novel view angle from a given projection image at a specific view angle, alleviating the need for an actual projection measurement. Specifically, we propose a deep learning-based geometry-integrated projection synthesis framework for generating novel-view X-ray projections.

MATERIALS/METHODS
We investigate, for the first time, the novel-view projection synthesis problem for X-ray imaging. We propose a deep learning-based geometry-integrated projection synthesis model that generates novel-view X-ray projections through feature disentanglement and geometry transformation. The model extracts geometry and texture features from a source-view projection and then applies a geometry transformation to the geometry features to accommodate the change of view angle. The X-ray projection in the target view is then synthesized from the transformed geometry features and the shared texture features via an image generator. We validate the proposed approach with experiments on one-to-one and multi-to-multi X-ray projection synthesis using lung imaging cases across various patients. We conduct experiments on a public dataset containing 1018 lung CT images, with 80% of the data used for training and 20% for testing.
The projection images are digitally produced from the CT images using a geometry consistent with a clinical on-board cone-beam CT system for radiation therapy.

RESULTS
We deploy the trained model on the held-out testing set for novel-view projection synthesis. The generated projections are compared with the ground truth both qualitatively and quantitatively. For AP-to-lateral and lateral-to-AP view synthesis, the average NRMSE / SSIM / PSNR values over all testing data are 0.185 / 0.880 / 22.646 and 0.173 / 0.911 / 23.908, respectively. Promising results are also obtained for multi-to-multi view synthesis. Visualizing the generated projections, we observe that the proposed model produces images closely matching the targets despite the varied anatomic structures of different patients, indicating the model's potential for X-ray projection synthesis.

CONCLUSION
This work investigates the synthesis of novel-view X-ray projections and presents a robust model combining deep learning with geometry transformation. The generated X-ray projections reveal the internal anatomy from new viewpoints, which potentially provides a new paradigm for volumetric imaging with substantially reduced data-acquisition effort.
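The image-similarity metrics reported above (NRMSE, SSIM, PSNR) can be sketched as follows. This is a minimal NumPy illustration, not the study's exact evaluation code: it uses a global single-window SSIM (published evaluations typically use a sliding Gaussian window) and normalizes RMSE by the reference intensity range, one of several common NRMSE conventions.

```python
import numpy as np

def nrmse(ref, gen):
    """Root-mean-square error normalized by the reference intensity range."""
    rmse = np.sqrt(np.mean((ref - gen) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, gen, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images with the given dynamic range."""
    mse = np.mean((ref - gen) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim(ref, gen, data_range=1.0):
    """Global (single-window) SSIM with the standard stabilizing constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), gen.mean()
    var_x, var_y = ref.var(), gen.var()
    cov = np.mean((ref - mu_x) * (gen - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For identical images these return NRMSE 0 and SSIM 1; a uniform intensity offset of 0.01 on unit-range images yields a PSNR of 40 dB, so the ~22-24 dB values above correspond to noticeably larger, spatially varying residuals.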

Volume 111 3S
Pages e118-e119
DOI 10.1016/j.ijrobp.2021.07.534
Language English
Journal International journal of radiation oncology, biology, physics
