Deep learning-based synthetic-CT generation in radiotherapy and PET: a review
Maria Francesca Spadea, Matteo Maspero, Paolo Zaffino, Joao Seco
Department of Clinical and Experimental Medicine, University "Magna Graecia" of Catanzaro, 88100 Catanzaro, Italy
Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
DKFZ German Cancer Research Center, Division of Biomedical Physics in Radiation Oncology, 69120 Heidelberg, Germany
Department of Physics and Astronomy, Heidelberg University, 69120 Heidelberg, Germany
* These authors contributed equally.
Version typeset: February 5, 2021
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: I) to replace CT in magnetic resonance (MR)-based treatment planning, II) to facilitate cone-beam computed tomography (CBCT)-based image-guided adaptive radiotherapy, and III) to derive attenuation maps for the correction of positron emission tomography (PET). Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges and summarising the achievements. Lastly, the statistics of all the cited works were analysed from various aspects, revealing the popularity and future trends and the potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated by assessing the clinical readiness of the presented methods.
Authors to whom correspondence should be addressed. Email: [email protected]

I. Introduction

The impact of medical imaging in the diagnosis and therapy of oncological patients has grown significantly over the last decades. Especially in radiotherapy (RT), imaging plays a crucial role in the entire workflow, from treatment simulation to patient positioning and monitoring.
Traditionally, computed tomography (CT) is considered the primary imaging modality in RT, since it provides an accurate and high-resolution representation of the patient's geometry, enabling the direct electron density conversion needed for dose calculations. X-ray-based imaging, including planar imaging and cone-beam computed tomography (CBCT), is widely adopted for patient positioning and monitoring before, during or after dose delivery. Along with CT, functional and metabolic information, mainly derived from positron emission tomography (PET), is commonly acquired, allowing tumour staging and improving tumour contouring. Magnetic resonance imaging (MRI) has also proved its added value for the delineation of tumours and organs-at-risk (OAR) thanks to its superb soft-tissue contrast.
To benefit from the complementary advantages offered by different imaging modalities, MRI is generally registered to CT. However, residual misregistration and differences in patient set-up may introduce systematic errors that would affect the accuracy of the whole treatment. Recently, MR-only RT has been proposed to eliminate residual registration errors. Furthermore, it can simplify and speed up the workflow, decreasing the patient's exposure to ionising radiation, which is particularly relevant for repeated simulations or fragile populations, e.g. children. Also, MR-only RT may reduce overall treatment costs and workload. Additionally, the development of MR-only techniques can be beneficial for MRI-guided RT.
The main obstacle to the introduction of MR-only radiotherapy is the lack of tissue attenuation information, required for accurate dose calculations. Many methods have been proposed to convert MR to CT-equivalent representations, often known as synthetic CT (sCT), for treatment planning and dose calculation. These approaches are summarised in two specific reviews on this topic or in broader reviews about MR-only radiotherapy and proton therapy.
Additionally, similar techniques to derive sCT from a different imaging modality have been envisioned to improve the quality of CBCT. Cone-beam computed tomography plays a vital role in image-guided adaptive radiation therapy (IGART), for photon and proton therapy. However, due to severe scatter noise and truncated projections, image reconstruction is affected by several artefacts, such as shading, streaking and cupping. For this reason, daily CBCT has not commonly been used for online plan adaptation. The conversion of CBCT to CT would allow accurate dose computation and improve the quality of the IGART provided to the patients.
Finally, sCT estimation is also crucial for PET attenuation correction. Accurate PET quantification requires a reliable photon attenuation correction (AC) map, usually derived from CT.
In the new PET/MRI hybrid scanners, this step is not immediate, and MRI-to-sCT translation has been proposed to solve the MR attenuation correction (MRAC) issue. Besides, standalone PET scanners can benefit from the derivation of sCT from uncorrected PET.
In the last years, the derivation of sCT from MR, PET or CBCT has raised increasing interest, based on artificial intelligence algorithms such as machine learning or deep learning (DL). This paper aims to perform a systematic review and summarise the latest developments, challenges and trends in DL-based sCT generation methods. Deep learning is a branch of machine learning, which is a field of artificial intelligence, that uses neural networks to generate hierarchical representations of the input data and learn a specific task without the need for hand-engineered features. Recent reviews have discussed the application of deep learning in radiotherapy and in PET attenuation correction. Convolutional neural networks (CNNs), which are the most successful type of models for image processing, have been proposed for sCT generation since 2016, with a rapidly increasing number of published papers on the topic. However, DL-based sCT generation has not been reviewed in detail, except for applications in PET. With this survey, we aim at summarising the latest developments in DL-based sCT generation, highlighting the contributions based on the applications and providing detailed statistics discussing trends in terms of imaging protocols, DL architectures, and performance achieved. Finally, the clinical readiness of the reviewed methods will be discussed.

II. Material and Methods

A systematic review of techniques was carried out using the PRISMA guidelines. PubMed, Scopus and Web of Science databases were searched from January 2014 to December 2020 using defined criteria (for more details see Appendix VII).
Studies related to radiation therapy, either with photons or protons, and to attenuation correction for PET were included when dealing with sCT generation from MRI, CBCT or PET. This review considered external beam radiation therapy, therefore excluding investigations focusing on brachytherapy. Conversion methods based on basic machine learning techniques were not considered in this review, preferring only deep learning-based approaches. Also, the generation of dual-energy CT was not considered, along with the direct estimation of corrected attenuation maps from PET. Finally, conference proceedings were excluded: proceedings can contain valid methodologies; however, the large number of relevant abstracts and the incomplete reporting of information were considered not suitable for this review. After the database search, duplicated articles were removed and records screened for eligibility. A citation search of the identified articles was performed.
Each included study was assigned to a clinical application category. The selected categories were:
I. MR-only RT;
II. CBCT-to-CT for image-guided (adaptive) radiotherapy;
III. PET attenuation correction.
For each category, an overview of the methods was constructed in the form of tables, capturing the salient information of DL-based sCT generation approaches, which is schematically depicted in Figure 1. The tables presented in this review have been made publicly accessible at https://matteomaspero.github.io/overview_sct.
Figure 1:
Schematic of deep learning-based sCT generation study.
The input images/volumes, either MRI (green), CBCT (yellow) or PET (red), are converted by a convolutional neural network (CNN) into an sCT. The CNN is trained to generate sCT similar to the target CT (blue). Several choices can be made in terms of network architecture, configuration and data pairing. After the sCT generation, the output image/volume is evaluated with image- and task-specific metrics.

Independent of the input image (MRI, CBCT or PET), the chosen architecture (CNN) can be trained with paired or unpaired input data and in different configurations. In this review, we define the following configurations: 2D (single slice, 2D, or patch, 2Dp) when training was performed considering transverse (tra), sagittal (sag) or coronal (cor) images; 2D+ when independently trained 2D networks in different views were combined during or after inference; multi-2D (m2D, also known as multi-plane) when slices from different views, e.g. transverse, sagittal and coronal, were provided to the same network; 2.5D when training was performed with neighbouring slices provided to multiple input channels of one network; 3D when volumes were considered as input (the whole volume, 3D, or patches, 3Dp). The architectures generally considered are introduced in the next section (II.A). The sCTs are generated by inferring the trained network on an independent test set or by combining an ensemble (ens) of trained networks. Finally, the quality of the sCT can be evaluated with image-based or task-specific metrics (II.B).
For each sCT generation category, we compiled tables providing a summary of the published techniques, including the key findings of each study and other pertinent factors, namely: the anatomical site investigated; the number of patients included; relevant information about the imaging protocol; the DL architecture; the configuration chosen to sample the patient volume (2D, 2D+, m2D, 2.5D or 3D); the use of paired/unpaired data during network training; the radiation treatment adopted, where appropriate; along with the most popular metrics used to evaluate the quality of the sCT (see II.B).
The year of publication for each category was noted according to the date of first online appearance. Statistics in terms of the popularity of the mentioned fields were calculated with pie charts for each category. Specifically, we subdivided the papers according to the anatomical region they dealt with: abdomen, brain, head & neck (H&N), thorax, pelvis and whole body; where available, the tumour site was also reported. A discussion of the clinical feasibility of each methodology and of the observed trends follows.
The most common network architectures and metrics are introduced in the next sections to facilitate the tables' interpretation.

II.A. Deep learning for image synthesis

Medical image synthesis can be formulated as an image-to-image translation problem, where a model that maps an input image (A) to a target image (B) has to be found. Among all the possible strategies, DL methods have dramatically improved the state of the art. The DL approaches mostly used to synthesise sCT belong to the class of CNNs, where convolutional filters are combined through weights (also called parameters) learned during training. The depth is provided by using multiple layers of filters. The training is regulated by finding the "optimal" model parameters according to the search criterion defined by a loss function (L). Many CNN-based architectures have been proposed for image synthesis, the most popular being U-nets and generative adversarial networks (GANs) (see Figure 2). A U-net presents an encoding and a decoding path with additional skip connections to extract and reconstruct image features, thus learning to go from A to B.
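As a purely illustrative sketch of this encoder-decoder idea (a minimal PyTorch example with hypothetical layer sizes, not the architecture of any specific reviewed study), such a network maps an input image to Hounsfield-unit values and can be trained with a voxel-wise loss against the registered target CT:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal 2D encoder-decoder with one skip connection,
    mapping an input image (e.g. an MRI slice) to a synthetic CT slice."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 1))  # HU-valued output

    def forward(self, x):
        e = self.enc(x)                     # encoder features
        b = self.bottleneck(self.down(e))   # lower-resolution representation
        d = torch.cat([self.up(b), e], 1)   # decoder + skip connection
        return self.dec(d)

# training-step sketch: L1 loss between the sCT and the (registered) target CT
net = TinyUNet()
mri, ct = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # placeholder batch
loss = nn.L1Loss()(net(mri), ct)
```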
In the simplest GAN architecture, two networks compete: a generator (G), trained to produce synthetic images (B′) that resemble the target images (loss L_G), and a discriminator (D), trained to classify whether B′ is real or fake (loss L_D), thereby improving G's performance. GANs learn a loss that combines both tasks, resulting in realistic images.
Given these premises, many variants of GANs can be arranged, with a U-net being a possible generator in the GAN framework. We will not detail all possible configurations, since this is not the scope of this review, and we address the interested reader to the dedicated literature. A particular derivation of the GAN, called cycle-consistent GAN (cycle-GAN), is worth mentioning. Cycle-GANs opened the era of unpaired image-to-image translation. Here, two GANs are adopted with their related loss terms: one going from A to B′, called the forward pass (forw), and a second going from B′ to A, called the backward pass (back) (Figure 2, bottom right). Two consistency losses L_c are introduced, aiming at minimising the difference between A and A′ and between B and B′, enabling unpaired training.
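The loss combination can be sketched as follows (a hypothetical PyTorch sketch: the generators G_AB, G_BA and discriminators D_A, D_B are placeholder modules, and the weighting of the cycle term is only indicative of the typical order of magnitude):

```python
import torch
import torch.nn as nn

adv = nn.BCEWithLogitsLoss()   # adversarial criterion
l1 = nn.L1Loss()               # cycle-consistency criterion
lambda_cyc = 10.0              # illustrative weight of the cycle-consistency term

def generator_losses(G_AB, G_BA, D_A, D_B, real_A, real_B):
    """Forward (A -> B') and backward (B -> A') passes with cycle consistency.
    real_A and real_B are *unpaired* batches from the two image domains."""
    fake_B = G_AB(real_A)      # e.g. MRI/CBCT -> sCT
    fake_A = G_BA(real_B)      # CT -> synthetic input modality
    # adversarial terms: the generators try to fool the discriminators
    loss_gan = adv(D_B(fake_B), torch.ones_like(D_B(fake_B))) + \
               adv(D_A(fake_A), torch.ones_like(D_A(fake_A)))
    # cycle-consistency terms: A -> B' -> A'' should return close to A, and vice versa
    loss_cyc = l1(G_BA(fake_B), real_A) + l1(G_AB(fake_A), real_B)
    return loss_gan + lambda_cyc * loss_cyc
```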
Figure 2:
Deep learning architectures used for image-to-image translation. In the most straightforward configurations (CNN and U-net, top left and right, respectively), a single loss function between input and output images is computed. GANs (bottom) use more than one CNN and loss to train the generator (G). Cycle-GANs enable unsupervised learning by employing multiple GANs and cycle-consistency losses (L_cycle).

II.B. Metrics

An overview of the metrics used to assess and compare the performances of the reviewed publications is summarised in Table 1.

Table 1:
Overview of the most popular metrics reported in the literature, subdivided by category into image similarity, geometric accuracy and task-specific metrics.
Category: Image similarity
  MAE = \frac{1}{n} \sum_{i=1}^{n} |CT_i - sCT_i|, with n = number of voxels in the ROI
  RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (CT_i - sCT_i)^2}
  PSNR = 10 \log_{10}\!\left(\frac{MAX_{CT}^2}{MSE}\right)
  SSIM = \frac{(2\mu_{sCT}\mu_{CT} + c_1)(2\sigma_{sCT,CT} + c_2)}{(\mu_{sCT}^2 + \mu_{CT}^2 + c_1)(\sigma_{sCT}^2 + \sigma_{CT}^2 + c_2)}, with c_1 = (k_1 L)^2, c_2 = (k_2 L)^2,
  \mu = mean, \sigma = variance/covariance, L = dynamic range, k_1 = 0.01 and k_2 = 0.03

Category: Geometric accuracy
  DSC(Seg_{CT}, Seg_{sCT}) = \frac{2\,|Seg_{sCT} \cap Seg_{CT}|}{|Seg_{sCT}| + |Seg_{CT}|}

Category: Task-specific
  MR-only & CBCT-to-CT:
    DD = 100 \cdot \frac{D_{sCT} - D_{CT}}{D_{CT}}, with D = dose
    DPR = % of voxels in a ROI with DD within x%
    GPR = % of voxels with \gamma \le 1
  PET reconstruction:
    PET_{err} = 100 \cdot \frac{PET_{sCT} - PET_{CT}}{PET_{CT}} %
    PET_{|err|} = 100 \cdot \frac{|PET_{sCT} - PET_{CT}|}{PET_{CT}} %

Image similarity
The most straightforward way to evaluate the quality of the sCT is to calculate its similarity to the ground truth/target CT. The calculation of voxel-based image similarity metrics implies that sCT and CT are aligned by translation, rigid (rig), affine (aff) or deformable (def) registration. The most common similarity metrics are reported in Table 1 and include: mean absolute error (MAE), sometimes referred to as mean absolute prediction error, peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Other less common metrics are the cross-correlation (CC) and normalised
cross-correlation (NCC), along with the (root) mean squared error ((R)MSE).
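As a minimal illustration (a numpy sketch with placeholder arrays and ROI mask; the exact ROI and dynamic-range conventions vary among the reviewed studies), the voxel-wise metrics of Table 1 can be computed as:

```python
import numpy as np

def image_similarity(ct, sct, roi):
    """MAE, RMSE and PSNR between a (registered) CT and sCT within a ROI mask."""
    diff = (sct - ct)[roi]                      # HU differences inside the ROI
    mae = np.mean(np.abs(diff))                 # mean absolute error [HU]
    rmse = np.sqrt(np.mean(diff ** 2))          # root mean squared error [HU]
    max_ct = ct[roi].max()                      # dynamic range used for PSNR
    psnr = 10 * np.log10(max_ct ** 2 / np.mean(diff ** 2))
    return mae, rmse, psnr
```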
Geometric accuracy
Along with voxel-based metrics, the geometric accuracy of the generated sCT can also be assessed; in this context, binary masks can facilitate the task. For example, the Dice similarity coefficient (DSC) is a widespread metric that assesses the accuracy of depicting specific tissue classes/structures, e.g. bones, fat, muscle, air and body. In this context, the DSC is calculated after applying a threshold to CT and sCT and, if necessary, morphological operations on the binary masks. Other image-based metrics can be subdivided according to the application, and they will be presented in the appropriate sub-categories in the following sections.
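For instance, a bone DSC could be computed as in the following sketch (numpy; the 250 HU threshold is only an illustrative value, since thresholds differ among the reviewed studies):

```python
import numpy as np

def bone_dsc(ct, sct, threshold_hu=250):
    """Dice similarity coefficient of thresholded bone masks on CT and sCT."""
    seg_ct = ct > threshold_hu
    seg_sct = sct > threshold_hu
    intersection = np.logical_and(seg_ct, seg_sct).sum()
    return 2.0 * intersection / (seg_ct.sum() + seg_sct.sum())
```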
Task-specific metrics
Additionally, task-specific metrics can be considered. For example, in the case of MR-only RT and CBCT-to-CT for adaptive RT, the accuracy of dose calculation on sCT is generally compared to the CT-based dose through the dose difference (DD), dose pass rate (DPR), γ analysis via the gamma pass rate (GPR) and, in the case of proton RT, range shift (RS) analysis. Also, the differences among clinically relevant dose-volume histogram (DVH) points are often reported. Dose calculations are performed either for photon (x) or proton (p) RT. For sCT for PET attenuation correction, the relative and absolute errors of the PET reconstruction (PET_err and PET_|err|, respectively) are generally reported, along with the difference in standardised uptake values (SUV).
Please note that differences may occur in the region-of-interest (ROI) where the metrics are calculated. For example, MAE can be computed on the whole predicted volume, in a volume of interest (VOI) or in a cropped volume. In addition, the implementation of the metric computation can change: γ analyses can be calculated with different dose-difference/distance-to-agreement criteria (e.g. 1%/1 mm, 2%/2 mm, 3%/3 mm), different dose thresholds and 2D or 3D algorithms, and the values chosen to threshold the CT/sCT for the DSC may vary among the literature. In the following sections, we will highlight the possible differences, speculating on their impact.
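As an illustrative sketch of the dosimetric metrics (numpy; dose grids, ROI and the x% tolerance are placeholders, and γ analysis is normally performed with dedicated, validated tools rather than re-implemented):

```python
import numpy as np

def dose_metrics(dose_ct, dose_sct, roi, tolerance_pct=2.0):
    """Dose difference (DD) and dose pass rate (DPR) inside a ROI,
    following the definitions of Table 1 (DD relative to the CT-based dose)."""
    d_ct, d_sct = dose_ct[roi], dose_sct[roi]
    dd = 100.0 * (d_sct - d_ct) / d_ct     # per-voxel DD in %; the ROI should exclude near-zero dose
    dpr = 100.0 * np.mean(np.abs(dd) <= tolerance_pct)  # % of ROI voxels within the x% tolerance
    return dd.mean(), dpr
```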
III. Results

Database searching led to 91 records on PubMed, 98 on Scopus and 218 on Web of Science. After duplicate removal and content check, 83 eligible papers were found.
Figure 3 summarises the number of articles published by year, grouped into 51 (61%) studies on MR-only RT, 15 (18%) on CBCT-to-CT and 17 (21%) on sCT for PET attenuation correction. Given that we excluded conference papers from our search, we found that the first work was published in 2017 and, in general, the number of articles increased over the years, except for CBCT-to-CT and sCT for PET attenuation correction, which remained stable in the last years. Figure 3 shows that brain, pelvis and H&N were the most popular anatomical regions investigated in deep learning-based sCT for MR-only RT, covering ∼
80% of the studies. For CBCT-to-CT, the H&N and pelvic regions were the most popular, being investigated in >75% of the studies. Finally, for PET AC, H&N was investigated in most of the studies, followed by the pelvic region, together covering >75% of the publications.
The total number of patients included in the studies was variable, but most studies dealt with <50 patients for all three categories. The three most extensive studies included 402 (I), 328 (II) and 193 patients (I), while the smallest studies included ten patients and another ten volunteers (I).
Most papers included adult patients. Paediatric (paed) patients represent a more heterogeneous dataset for network training, and the feasibility of training on them was investigated first for attenuation correction in PET (79 patients) and more recently for photon and proton RT.
All the models were trained to perform a regression task from the input to the sCT, except for two studies where networks were trained to segment the input image into a pre-defined number of classes, performing a segmentation task.
In most of the works, training was performed in a paired manner, with unpaired training investigated in 13/83 articles. Four studies compared paired against unpaired training. The 2D networks were the most common over the three categories, being adopted about 61% of the time, 2D+ 6%, 2.5D 10%, and the 3D configuration 24%. In some studies, multiple configurations were investigated. GANs were the most popular architectures (45 times), followed by U-nets (36) and other CNNs. Note that a U-net may be employed as the GAN generator, but this was counted as GAN.

Figure 3: (Top) Number of published articles grouped by application and year; (middle) pie charts of the anatomical regions investigated for each application; (bottom) bar plot of the publications binned by the total number of patients included in the study.

All the investigations employed registration between sCT and CT to evaluate the quality
of the sCT, except for Xu et al. and Fetty et al., where metrics were defined to assess the quality of the sCT in an unpaired manner, e.g. the Fréchet inception distance (FID).
The main findings are reported in Table 2 for studies on sCT for MR-only RT without dosimetric evaluations, in Tables 3a and 3b for studies on sCT for MR-only RT with dosimetric evaluations, in Table 4 for studies on CBCT-to-CT for IGART, and in Table 5 for studies on PET attenuation correction. Tables are organised by anatomical site and tumour location where available. Studies investigating the independent training and testing of several anatomical regions are reported for each specific site. Studies using the same network to train or test data from different scanners and anatomies are reported at the bottom of the tables. Detailed results based on these tables are presented in the following sections, subdivided for each category.

III.A. MR-only radiotherapy

The first work ever published in this category, and among all the categories, was by Han in 2017, who proposed to use a paired U-net for brain sCT generation. One year later, the first work with a dosimetric evaluation was presented by Maspero et al., investigating a 2D paired GAN trained on prostate patients and evaluated on prostate, rectal and cervical cancer patients.
Considering the imaging protocol, we can observe that most of the MRIs were acquired at 1.5 T (51.9%), followed by 3 T (42.6%), and the remaining 6.5% at 1 T or 0.35/0.3 T. The most popular MRI sequence adopted depends on the anatomical site: T1 gradient recalled-echo (T1 GRE) for abdomen and brain; T2 turbo spin-echo (TSE) for pelvis and H&N. Unfortunately, for more than ten studies either the sequence or the magnetic field was not adequately reported.
Generally, a single MRI sequence is used as input. However, eight studies investigated using multiple input sequences or Dixon reconstructions, based on the assumption that more input contrasts may facilitate sCT generation. Some studies compared the performance of sCT generation depending on the sequence acquired. For example, Massa et al. compared sCT from the most adopted MRI sequences in the brain, e.g. T1 GRE with (+Gd) and without Gadolinium (-Gd), T2 SE and T2 fluid-attenuated inversion recovery (FLAIR), obtaining the lowest MAE and highest PSNR for T1 GRE sequences with Gadolinium administration.
Table 2: Overview of sCT methods for MR-only radiotherapy with sole image-based evaluation.

Table 3a: Overview of sCT methods for MR-only radiotherapy with image-based and dose evaluation.

Table 3b: Overview of sCT methods for MR-only radiotherapy with image-based and dose evaluation (continued).

Florkow et al. investigated how the performance of a 3D patch-based paired U-net was impacted by different combinations of T1 GRE images along with their Dixon reconstructions, finding that using multiple Dixon images is beneficial in the human and canine pelvis. Qi et al. studied the impact of combining T1 (±Gd) and T2 TSE, finding that their 2D paired GAN model trained on multiple sequences outperformed any model trained on a single sequence.
When focusing on the DL model configuration, we found that 2D models were the most popular ones, followed by 3D patch-based and 2.5D models.
Only one study adopted a multi-2D (m2D) configuration. Three studies also investigated the impact of combining sCTs from multiple 2D models after inference (2D+), showing that 2D+ is beneficial compared to a single 2D view. When comparing the performance of 2D against 3D models, Fu et al. found that a modified 3D U-net outperformed a 2D U-net, while Neppl et al., one month later, published that their 3D U-net under-performed a 2D U-net not only on image similarity metrics but also considering photon and proton dose differences. These contradicting results will be discussed later. Paired models were the most adopted, with only ten studies investigating unpaired training. Interestingly, Li et al. compared a 2D U-net trained in a paired manner against a cycle-GAN trained in an unpaired manner, finding that image similarity was higher with the U-net. Similarly, two other studies compared 2D paired against unpaired GANs, achieving slightly better similarity and lower dose differences with paired training in the abdomen and H&N. Mixed paired/unpaired training was proposed by Jin et al., who found such a technique beneficial compared to either paired or unpaired training. To improve unpaired training, Yang et al. found that structure-constrained loss functions and spectral normalisation ameliorated the performance of unpaired training in the pelvic and abdominal regions.
An interesting study on the impact of the direction of patch-based 2D slices, patch size and GAN architecture was conducted by Klages et al., who reported that 2D+ is beneficial compared to single-view (2D) training, that overlapping/non-overlapping patches are not a crucial point, and that, upon good registration, training of paired GANs outperforms unpaired training (cycle-GANs).
If we now turn to the architectures employed, we can observe that GANs cover the majority of the studies, followed by U-nets and other CNNs.
For instance, one study showed that U-net and GANs could achieve similar image- and dose-based performances. Fetty et al. focused on comparing different generators of a 2D paired GAN against the performance of an ensemble of models, finding that the ensemble was overall better than single models, being more robust when generalising to data from different scanners/centres. When considering CNN architectures, it is worth mentioning the 2.5D dilated CNNs used by Dinkla et al., where the m2D training was claimed to increase the robustness of inference in a 2D+ manner while maintaining a large receptive field and a low number of weights.
An exciting aspect investigated by four studies is the impact of the training size, which will be further reviewed in the discussion section.
Finally, when considering the metric performances, we found that 21 studies reported only image similarity metrics, and 30 also investigated the accuracy of sCT-based dose calculation for photon RT (19), proton RT (8), or both (3). Two studies performed treatment planning considering the contribution of magnetic fields, which is crucial for MR-guided RT. Also, only four publications studied the robustness of sCT generation in a multicentric setting.
Overall, DL-based sCT resulted in DD on average <
1% and high γ pass rates. For each anatomical site, the metrics on image similarity and dose were not always calculated consistently. This aspect will be detailed in the next section.

III.B. CBCT-to-CT generation

CBCT-to-CT conversion via DL is the most recent CT synthesis application, with the first paper published in 2018. Some of the works (5 out of 15) focused only on improving CBCT image quality for better IGRT. The remaining 10 proved the validity of the transformation with dosimetric studies for photons, protons, or both photons and protons.
Only three studies investigated unpaired training; in eleven cases, paired training was implemented by matching the CBCT and the ground truth CT through rigid or deformable registration. In Eck et al., however, CBCT and CT were not registered for the training phase, as the authors claimed the first-fraction CBCT was geometrically close enough to the planning CT for the network.

Table 4: Overview of sCT methods for adaptive radiotherapy with CBCT.
Deformable registration was then performed for the image similarity analysis. In this work, the quality of contours propagated to the sCT from the CT was compared to manual contours drawn on the CT to assess each step of the IGART workflow: image similarity, anatomical segmentation and dosimetric accuracy. The network, a 2D cycle-GAN implemented in a vendor-provided research software, was independently trained and tested on different sites, H&N, thorax and pelvis, leading to the best results for the pelvic region.
Other authors studied training a single network with different anatomical regions. Maspero et al. compared the performances of three cycle-GANs trained independently on three anatomical sites (H&N, breast and lung) against a single network trained on all the anatomical sites together, finding similar results in terms of image similarity.
Zhang et al. trained a 2.5D conditional GAN with feature matching on a large cohort of 135 pelvic patients. They then tested the network on an additional 15 pelvic patients acquired with a different CT scanner and on ten H&N patients. The network predicted sCT with similar MAE for both testing groups, demonstrating the potential of transferring pre-trained models to different anatomical regions. They also compared different GAN flavours and a U-net, finding the latter statistically worse than any GAN configuration.
Three works tested unpaired training with cycle-GANs. In particular, Liang et al. compared unsupervised training among cycle-GAN, DCGAN and PGGAN on the same dataset, finding the first to perform better both in terms of image similarity and dose agreement.
As regards the body region, most of the studies focused on the H&N and pelvic regions. Liu et al. investigated CBCT-to-CT in the framework of breath-hold stereotactic pancreatic radiotherapy, where they trained a 3D patch cycle-GAN introducing an attention gate (AG) to deal with moving organs. They found that the cycle-GAN with AG performed better than a U-net and a cycle-GAN without AG. Moreover, the DL approach led to a statistically significant improvement of the replanning on sCT vs. CBCT, although some residual discrepancies were still present for this particular anatomical site.
III.C. PET attenuation correction

DL methods for deriving sCT for PET AC have been published since 2017. Two possible image translations are available in this category: i) MR-to-CT for MR attenuation correction (MRAC), for which 14 papers were found; ii) uncorrected PET-to-CT, with three published articles.
In the first case, most methods have been tested with paired data in the H&N (9 papers) and pelvic (4 papers) regions, except Baydoun et al., who investigated the thoracic district. The number of patients used for training ranged between 10 and 60. Most of the MR images employed in these studies were acquired directly on 3 T PET/MRI hybrid scanners, where specific MR sequences, such as UTE (ultra-short echo time) and ZTE (zero echo time), are used to enhance short-T2 tissues, such as the cortical bone, and Dixon reconstruction is employed to derive fat and water images.
Leynes et al. compared the Dixon-based sCT against the sCT predicted by a U-net receiving both Dixon and ZTE images. Results showed that the DL prediction reduced the RMSE of the corrected PET SUV by a factor of 4 for bone lesions and 1.5 for soft-tissue lesions. Following this first work, other authors showed the improvement of DL-based AC over the traditional atlas-based MRAC proposed by the vendors, also comparing several network configurations.
Torrado et al. pre-trained their U-net on 19 healthy brains acquired with T1 GRE MRI and, subsequently, trained the network using Dixon images of colorectal and prostate cancer patients. They showed that pre-training led to faster training with a slightly smaller residual error than random initialisation of the U-net weights.
Pozaruk et al. proposed data augmentation, over 18 prostate cancer patients, by perturbing the deformation field used to match the MR/CT pairs fed to the network. They compared the performance of a GAN with augmentation against 1) the vendor's Dixon-based and 2) Dixon + bone segmentation methods, and 3) a U-net with and 4) without augmentation. They found significant differences between the three DL methods and the classic MRAC routines. The GAN with augmentation performed slightly better than the U-net with/without augmentation, although the differences were not statistically significant.
Gong et al. used unregistered MR/CT pairs for a 3D patch cycle-GAN, comparing the results against atlas-based MRAC and a CNN trained with registered pairs. Both DL methods performed better than atlas-based MRAC in terms of DSC, MAE and PET_err, and no significant difference was found between the CNN and the cycle-GAN.
Table 5: Overview of methods on sCT for PET AC.
They concluded that the cycle-GAN has the potential to remove the requirement of a perfectly aligned dataset for training; however, it requires more input data to improve the output.
Baydoun et al. tried different network configurations (VGG16, VGG19 and ResNet) as benchmarks against a 2D conditional GAN receiving either two Dixon inputs (water and fat) or four (water, fat, in-phase and opposed-phase). The GAN always performed better than VGG19 and ResNet, with more accurate results obtained with four inputs.
In the effort to reduce the time for image acquisition and patient discomfort, some authors proposed to obtain the sCT directly from diagnostic T1- or T2-weighted images, using either images from standalone MRI scanners or hybrid machines. In particular, Bradshaw et al. trained a combination of three CNNs with T1 GRE and T2 TSE MRI (single sequence or both) to derive an sCT stratified into classes (air, water, fat and bone), which was compared with the scanner's default MRAC output. The RMSE of the PET reconstruction, computed on SUV, was significantly lower with the deep learning method and T1/T2 input. Recently, however, Gong et al. tested, on a brain patient cohort, a CNN with either T1, Dixon or multiple-echo UTE (mUTE) images as input; the latter outperformed the others. Liu et al. trained a CNN to predict CT tissue classes from diagnostic 1.5 T T1 GRE images of 30 patients. They tested it on 10 independent patients of the same cohort, whose results are reported in Table 5 in terms of DSC. Then, they predicted sCT for 5 patients acquired prospectively with a 3 T MRI/PET scanner (T1 GRE) and computed the
PET_err, which resulted below 1% on average, demonstrating the validity of the approach for PET AC.
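For completeness, the PET error metrics of Table 1 reduce to a few lines (numpy sketch with placeholder arrays and mask):

```python
import numpy as np

def pet_errors(pet_ct, pet_sct, mask):
    """Relative and absolute relative error of the sCT-corrected PET with
    respect to the CT-corrected PET, within a mask (e.g. body or lesion)."""
    rel = 100.0 * (pet_sct[mask] - pet_ct[mask]) / pet_ct[mask]
    return rel.mean(), np.abs(rel).mean()   # PET_err [%], PET_|err| [%]
```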
IV. Discussion

This review encompassed DL-based approaches to generate sCT from other radiotherapy imaging modalities, focusing on published journal articles. The research topic was first introduced at conferences in 2016. Since then, we have observed increasing interest in using DL for sCT generation. The success of DL methods is probably related to the growth of available computational resources in the last decade, which allowed training on large volumetric datasets and achieving fast image translation (in the order of a few seconds), making it possible to apply DL in clinical cases and demonstrate its feasibility for clinical scenarios.
In this review, we considered three clinical purposes for deriving sCT from other image modalities, which are discussed in the following:

I. MR-only RT.
The generation of sCT for MR-only RT with DL is the most populated category. Its 52 papers demonstrate the potential of using DL for sCT generation from MRI. Several training techniques and configurations have been proposed. For anatomical regions such as the pelvis and brain/H&N, high image similarity and dosimetric accuracy can be achieved for photon RT and proton therapy. In regions strongly affected by motion, e.g. abdomen and thorax, the first feasibility studies seem to be promising. However, no study has yet proposed the generation of DL-based 4D sCT, as has been done with non-deep-learning-based methods. An exciting application is DL-based sCT generation for the paediatric population, which is considered more radiation-sensitive than the adult population and could enormously benefit from MR-only workflows, especially in the case of repeated simulations. The methods for sCT generation for brain and abdominal cases achieved encouraging photon and proton RT results.
The geometric accuracy of sCT needs to be thoroughly tested to enable the clinical adoption of sCT for treatment planning purposes, especially when MRI or sCT are used to substitute CT for position verification. So far, the number of studies that investigated this aspect of DL-based sCT is still scarce. Only Gupta et al. for the brain and Olberg et al. for breast cancer have investigated it, assessing the accuracy of alignment based on CBCT and digitally reconstructed radiographs, respectively. Future studies are required to strengthen the clinical use of sCT. MR-only RT can potentially allow daily image guidance and plan adaptation in the context of MR-guided radiotherapy, where the accuracy of dose calculation in the presence of the magnetic field needs to be assessed before clinical implementation. So far, the studies investigating this aspect are still few, e.g. for abdominal and pelvic tumours, and only considered low magnetic fields. The results are promising, but we advocate further studies on additional anatomical sites and magnetic field strengths.

II. CBCT-to-CT for image-guided (adaptive) radiotherapy.
In-room CBCT imaging is widespread in photon and proton radiation therapy for daily patient setup. However, CBCT is not commonly exploited for daily plan adaptation and dose recalculation due to the artefacts associated with scatter and reconstruction algorithms that affect the quality of the electron density estimated from CBCT. Traditional methods to cope with this issue have been based on image registration, scatter correction, look-up tables to rescale HU intensities, and histogram matching. The introduction of DL for converting CBCT to sCT has substantially improved image quality, leading to faster results than image registration and analytical corrections. Speed is a crucial aspect for the translation of the method into the clinical routine. However, one of the problems arising in CBCT-to-CT conversion for clinical application is the different field of view (FOV) between CBCT and CT. Usually, the training is performed by registering, cropping and resampling the volumes to the CBCT size, which is smaller than the planning CT.
Nonetheless, for replanning purposes, the limited FOV may prevent transferring the plan to the sCT. When this is the case, some authors have proposed to assign water-equivalent density within the CT body contour for the missing information. In other cases, the sCT patch has been stitched into the planning CT to cover the entire dose volume. Ideally, appropriate FOV coverage should be ensured when transferring the plan for online adaptive RT. Besides the dosimetric aspect, improved image quality leads to more accurate image guidance for patient set-up and OAR segmentation, all necessary steps for online adaptive radiotherapy, especially for anatomical sites prone to large movements, as speculated by Liu et al. in the framework of pancreatic treatments. CBCT-to-CT conversion has been proven both for photon and proton radiotherapy, where setup accuracy and dose calculation are even more relevant to avoid range shift errors that could jeopardise the benefit of the treatment. Because there is an intrinsic error in converting HU to relative proton stopping power, it has been shown that deep learning methods can translate CBCT directly to stopping power. This approach has not been covered in this review, but it is an interesting direction that will probably lead to further investigations.
III. PET attenuation correction.
The sCT in this category is obtained either from MR or from uncorrected PET. In the first case, the motivation is to overcome the current limitations in generating attenuation maps (µ-maps) from MR images in MRI/PET hybrid acquisitions, where the bone contribution is miscalculated. In the second case, the limits to overcome are different: i) to avoid extra radiation dose when only the PET exam is required, ii) to avoid misregistration errors when standalone CT and PET machines are used, iii) to be independent of the MR contrast in MRI/PET acquisitions. Regardless of the network configuration, the MRI used as input, or the number of patients included in the studies, DL-based sCTs have always outperformed the current MRAC methods available in commercial software. The results of this review support the idea that DL-based sCT will substitute current AC methods, being also able to overcome most of the limitations mentioned above. These aspects seem to contradict the stable number of papers in this category over the last three years. Nonetheless, we have to consider that the recent trend has been to directly derive the µ-map from uncorrected PET via DL. Because this review considered only image-to-CT translation, these works were not included but can be found in a recent review by Lee. However, it is worth mentioning a recent study by Shiri et al., where the largest patient cohort ever (1150 patients, split into 900 for training, 100 for validation and 150 for testing) was used for this purpose. Direct µ-map prediction via DL is a very promising opportunity which may direct future research efforts in this context.

Deep learning considerations and trends
Deep learning considerations and trends

The number of patients used for training the networks is quite variable, ranging from a minimum of 7 (in I) to maxima of 205 (in II) and 242 (in I). In most cases, the patient number is limited by the availability of training pairs. Data augmentation, in the form of linear and non-linear transformations, is performed to increase the training accuracy, as demonstrated by Pozaruk et al. However, few publications investigated the impact of increasing the training size, finding that image similarity increases when training with up to fifty patients. This provides some indication of the minimum number of patients to include in the training to achieve state-of-the-art performance. The optimal patient number may also depend on the anatomical site and its inter- and intra-fraction variability. Besides, attention should be dedicated to balancing the training set, as performed in some studies; otherwise, the network may overfit, as previously demonstrated for segmentation tasks.
GANs were the most popular architecture, but we cannot conclude that they are the best network scheme for sCT. Indeed, some studies comparing U-nets or other CNNs against GANs found GANs to perform statistically better; others found similar or even worse performance. We can speculate that, as demonstrated in some studies, a vital role is played by the loss function, which, despite being the effective driver of network learning, has been investigated less than the network architecture, as highlighted for image restoration. Another important aspect is the growing trend, except for category III, towards unpaired training (5 and 7 papers in 2019 and 2020, respectively). The quality of the registration when training in a paired manner influences the quality of deep learning-based sCT generation. In this sense, unpaired training offers an option to alleviate the need for well-matched training pairs. When comparing paired against unpaired training, we observed that paired training led to slightly better performance; however, the differences were not always statistically significant. As noted by Yang et al., unsupervised training may lose semantic information when going from one domain to another. Such an issue may be mitigated by introducing a structure-consistency loss, which extracts structural features from the images and defines the loss in the feature space. Yang et al.'s results showed improvements in this sense relative to other unsupervised methods. They also showed that pre-registering unpaired MR-CT data further improves the results of unsupervised training, which can be an option when input and target images are available but perfect alignment is not achievable. In some cases, unpaired training was even shown to be superior to paired training. A trend that has lately emerged is the use of architectures initially conceived for unpaired training, e.g. cycle-GANs, for paired training.
Focusing on the body sites, we observed that most of the investigations were conducted in the brain, H&N and pelvic regions, while fewer studies are available for the thorax and abdomen, which represent a more challenging patient population due to organ motion. For MR-only RT, we found contradictory results regarding the best-performing spatial configuration among the papers that directly compared 2D and 3D training.
It is certainly clear that 2D+ training increases sCT quality compared to single 2D views; however, when comparing 2D against 3D training, the patch size is an important aspect. 3D networks require a larger number of trainable parameters than 2D networks, and for sCT generation the adopted approaches have used patch sizes much smaller than the whole volume, probably limiting the contextual information considered. Generally, downsampling approaches have been proposed to increase the receptive field of the network, e.g. for segmentation tasks, but they have not yet been applied to sCT generation. We believe this will be an interesting area of research.
Concerning the latest developments from the deep learning perspective, in 2018 Oktay et al. proposed a new mechanism, called the attention gate (AG), to focus on target structures that can vary in shape and size. Liu et al. incorporated the AG in the generator of a cycle-GAN to learn organ variation from CBCT-CT pairs in the context of pancreatic adaptive RT, showing that its contribution significantly improved the prediction compared to the same network without AG. Other papers also adopted attention mechanisms. Embedding has also been proposed to increase the expressivity of the network and was applied by Xiang et al. (I). As the AG mechanism is a way to focus attention on specific portions of the image, it can potentially open the path to new research topics. In 2019, Schlemper and colleagues evaluated the AG for different tasks in medical image processing: classification, object detection and segmentation. We can thus envision that, in online IGART, such a mechanism could lead to multi-task applications, such as deriving the sCT while delineating the structures of interest.
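To make the AG mechanism concrete, the following minimal PyTorch sketch implements an additive attention gate in the spirit of Oktay et al., restricted to 2D and to gating signals already resampled to the spatial size of the skip features; layer sizes and names are illustrative and not taken from any of the reviewed implementations.

import torch
import torch.nn as nn

class AttentionGate2D(nn.Module):
    """Additive attention gate: rescales skip features x by coefficients in [0, 1]."""

    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features, g: gating signal from a coarser decoder level
        f = torch.relu(self.theta_x(x) + self.phi_g(g))  # additive attention
        alpha = torch.sigmoid(self.psi(f))               # attention coefficients
        return x * alpha                                 # gated skip features

# Usage sketch: gate 64-channel skip features with a 128-channel gating signal
# ag = AttentionGate2D(in_channels=64, gating_channels=128, inter_channels=32)
# x_gated = ag(skip_features, gating_signal)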
Benefits and challenges for clinical implementations

Deep learning-based sCT generation may reduce the need for additional or non-standard MRI sequences, e.g. UTE or ZTE, which could shorten the total acquisition time, speed up the workflow and increase patient throughput. As already mentioned, speed is particularly interesting for MR-guided RT, but it is considered crucial for adaptive RT (II) as well. Regarding categories II and III, DL-based sCT generation may enable dose reduction during imaging by reducing the need for a repeat CT in case of anatomical changes (in II) or by possibly reducing the amount of radioactive material injected (in III).
Finally, it is worth commenting on the current status of the clinical adoption of DL-based sCT. We could not find evidence that any of the methods considered is currently clinically implemented
and used. We speculate that this is probably related to the fact that the field is still relatively young, with the first publications appearing only in 2017, and that clinical implementation generally takes years, if not decades. Additionally, as already mentioned, for categories I/II the impact of sCT on position verification still needs to be thoroughly investigated. Also, implementation may be easier for category III if the methods were directly integrated into the scanner by the vendor. In general, the involvement of vendors may streamline the clinical adoption of DL-based sCT. In this sense, we can report that vendors are currently active in validating their methods in research settings, e.g. for brain and pelvis in I, and for H&N, thorax and pelvis in II. Very recently, Palmer et al. also reported using a pre-release version of a DL-based sCT generation approach for H&N MR-only RT. Another essential aspect is compliance with the currently adopted regulations, where vendors can offer vital support.
A key aspect of clinical implementation is the precise definition of the requirements that a DL-based solution needs to satisfy before being accepted. If we consider the reported metrics, we cannot find uniform criteria on what to report and how. Multiple metrics have been defined, and it is not clear on which regions of interest they should be computed. For example, image-based similarity was reported within the body contour or in tissue classes defined by different thresholds; for task-specific metrics, the methods employed are even more heterogeneous. For example, in I and II, gamma analysis can be performed in 2D or 3D, and different dose threshold levels have been employed, e.g. 10%, 30%, 50% or 90% of the prescribed or maximum dose. In III, the PET err can be computed on the SUV, on the maximum SUV or within larger VOIs, making it difficult to compare the performance of different network configurations. We think that this lack of standardisation in reporting the results is also detrimental to clinical adoption. A first attempt at revising the currently adopted metrics has been made by Liesbeth et al. However, this is still insufficient, considering the differences in how such metrics can be calculated and reported. In this sense, we advocate for consensus-based requirements that may facilitate reporting in future clinical trials. Also, no public datasets arranged in the form of grand challenges ( https://grand-challenge.org/ ) are available to enable a fair and open evaluation of different approaches.
To date, four scientific studies have investigated the performance of sCT in a multi-centre setting. These studies have been reported only for MR-only RT.
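To illustrate how strongly the reported numbers depend on these choices, the short sketch below computes the MAE of an sCT under two common masking conventions and a VOI-based relative PET error; the 250 HU bone threshold and all array names are illustrative only.

import numpy as np

def mae(sct_hu, ct_hu, mask):
    """Mean absolute HU error restricted to a boolean mask."""
    return float(np.mean(np.abs(sct_hu[mask] - ct_hu[mask])))

def pet_rel_err(pet_sct, pet_ct, voi):
    """Relative PET reconstruction error (%) within a volume of interest."""
    return float(100.0 * (np.mean(pet_sct[voi]) - np.mean(pet_ct[voi])) / np.mean(pet_ct[voi]))

# The same sCT can yield rather different numbers depending on the evaluation region:
# mae_body = mae(sct_hu, ct_hu, body_mask)       # MAE within the body contour
# mae_bone = mae(sct_hu, ct_hu, ct_hu > 250)     # MAE within a thresholded "bone" class
# err_voi  = pet_rel_err(pet_sct, pet_ct, tumour_voi)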
Future work should focus on assessing the performance of DL-based sCT generation for II and III. The quality of an sCT cannot easily be judged by a user, except when it is clearly inferior. Therefore, software-based quality assurance (QA) procedures should be put in place. It could be quite interesting to have a phantom enabling regular QA procedures, as is the case for CT QA. This would be relatively straightforward for II; however, for MR-based sCT, the manufacturing of phantoms is quite challenging due to the need for contrast in both MRI and CT images. Recently, the first phantoms have been proposed for such a task, showing the potential of additive manufacturing.
Alternatively, it would be interesting if a CNN could automatically generate a metric to assess the quality of sCTs, as already proposed, for example, for automatic segmentation. In this sense, Bragman et al. proposed using uncertainty for such a task, adopting a multi-task network and a Bayesian probabilistic framework. More recently, two other works proposed to use uncertainty either from the combination of independently trained networks or via dropout-based variational inference. So far, the field of uncertainty estimation with deep learning has only been superficially explored for sCT generation. It would be interesting to see future work focusing on developing criteria for the automatic identification of failure cases using uncertainty prediction. Patients with inaccurate synthetic CTs could then be flagged for a CT rescan or, if deemed feasible, manual adjustment of the sCT.
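As an example of the dropout-based variational inference mentioned above, the following minimal PyTorch sketch keeps dropout active at inference time and uses the spread across repeated forward passes as a voxel-wise uncertainty map; the generator, the number of samples and the flagging rule are placeholders rather than a validated QA criterion.

import torch

def mc_dropout_sct(generator, mr_volume, n_samples=20):
    """Monte Carlo dropout: mean sCT and voxel-wise standard deviation."""
    generator.train()  # keeps dropout stochastic (in practice, switch only the dropout layers)
    with torch.no_grad():
        samples = torch.stack([generator(mr_volume) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Usage sketch: flag cases whose mean in-body uncertainty exceeds a chosen threshold
# sct_mean, sct_std = mc_dropout_sct(netG, mr, n_samples=20)
# flag_for_review = sct_std[body_mask].mean() > uncertainty_threshold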
Beyond sCT for radiotherapy

During the database search, we found other possible applications of DL-based image generation, which are beyond the categories mentioned so far or beyond the radiotherapy application altogether. For example, Kawahara et al. proposed to generate synthetic dual-energy CT from CT to assess the body material composition using 2D paired GANs. Also, commercial solutions are starting to be evaluated for the generation of DL-based sCT from MRI for lesion detection in suspected sacroiliitis or to facilitate surgical planning of the spine. An interesting application is also the generation of sCT to facilitate multi-modal image registration, as proposed by Mckenzie et al.
Additionally, the methods reviewed here to generate sCT can be applied to translate other image modalities. Interesting examples in the RT realm are provided by Jiang et al., who investigated MRI-to-CT translation to increase the robustness of segmentation, and Kieselmann et al., who generated synthetic MRI from CT to enable the training of
segmentation networks that would exploit the wealth of delineations available on another modality. A detailed review of other image-to-image translation applications in radiotherapy has recently been compiled by Wang et al.

V. Conclusion

The deep learning-based generation of sCT has been reviewed to: I) replace CT in MR-based treatment planning, II) facilitate CBCT-based adaptive radiotherapy, and III) correct attenuation maps in PET. A detailed review of each category was presented, providing a comprehensive comparison among DL-based methods in terms of the most popular metrics reported. The essential contributions were highlighted, identifying specific challenges. We found that DL-based sCT generation is an active and growing area of research. For several anatomical sites, e.g. H&N/brain and pelvis, sCT generation seems to be feasible with deep learning. While deep learning-based sCT generation techniques are promising, comprehensive commissioning and QA of these techniques are critical before and essential during clinical deployment to ensure patient safety.

VI. Acknowledgements

Matteo Maspero is grateful to prof.dr.ir. Cornelis (Nico) A.T. van den Berg, head of the Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, UMC Utrecht, the Netherlands, for the general support provided during this manuscript's compilation.
VII. Conflict of interest
None of the authors has conflicts of interest to disclose.
Appendix
The query used in the selected databases - PubMed, Scopus and Web of Science - in the fields (Title/Abstract/Keywords) was the following (Figure 4):

(("radiotherapy") OR ("radiation therapy") OR ("proton therapy") OR ("oncology") OR ("imaging") OR ("radiology") OR ("healthcare") OR ("CBCT") OR ("cone-beam CT") OR ("PET") OR ("attenuation correction") OR ("attenuation map")) AND (("synthetic CT") OR ("syntheticCT") OR ("synthetic-CT") OR ("pseudo CT") OR ("pseudoCT") OR ("pseudo-CT") OR ("virtual CT") OR ("virtualCT") OR ("virtual-CT") OR ("derived CT") OR ("derivedCT") OR ("derived-CT") OR (sCT)) AND (("deep learning") OR ("convolutional network") OR ("CNN") OR ("GAN") OR ("GANN") OR (artificial intelligence));
Figure 4: Schematic of the search inclusion/exclusion criteria adopted for this review, selecting the time window, keywords, type of article, content and the three categories defined.
VIII. Acronyms and abbreviations

µ-map: attenuation map; AC: attenuation correction; aff: affine; AG: attention gate; CBCT: cone-beam computed tomography; CC: cross-correlation; CNNs: convolutional neural networks; cor: coronal; CT: computed tomography; cycle-GAN: cycle-consistent GAN; DD: dose difference; def: deformable; DL: deep learning; DPR: dose pass rate; DSC: dice similarity coefficient; DVH: dose-volume histogram; ens: ensemble; FLAIR: fluid-attenuated inversion recovery; FOV: field of view; GANs: generative adversarial networks; Gd: gadolinium; GPR: gamma pass rate; GRE: gradient recalled-echo; H&N: head and neck; IGART: image-guided adaptive radiation therapy; MAE: mean absolute error; MR: magnetic resonance; MRAC: MR attenuation correction; MRI: magnetic resonance imaging; MSE: mean squared error; mUTE: multiple echo UTE; NCC: normalised cross-correlation; OAR: organ-at-risk; p: proton; paed: paediatric; PET: positron emission tomography; PET|err|: absolute error PET reconstruction; PET err: relative error PET reconstruction; PSNR: peak signal-to-noise ratio; rig: rigid; RMSE: root mean squared error; ROI: region-of-interest; RS: range shift; RT: radiotherapy; sag: sagittal; sCT: synthetic computed tomography; SSIM: structural similarity index; SUV: standard uptake values; tra: transverse; TSE: turbo spin-echo; UTE: ultra-short echo time;
VOI: volume of interest; x: photon.

References

J. Husband, R. H. Reznek, and J. E. Husband,
Imaging in oncology , CRC Press, 2016. L. Beaton, S. Bandula, M. N. Gaze, and R. A. Sharma, How rapid advances in imagingare defining the future of precision radiation oncology, Br J Cancer , 779–790(2019). D. Verellen, M. De Ridder, N. Linthout, K. Tournel, G. Soete, and G. Storme, Innova-tions in image-guided radiotherapy, Nat Rev Canc , 949–960 (2007). D. A. Jaffray, Image-guided radiotherapy: from current concept to future perspectives,Nat Rev Clin Oncol , 688 (2012). J. Seco and M. F. Spadea, Imaging in particle therapy: state of the art and futureperspective, Acta Oncol , 1254–1258 (2015). Last edited Date:February 5, 2021 age 32 Spadea MF & Maspero M et al IAEA,
Radiotherapy in Cancer Care: Facing the Global Challenge , Non-serial Publica-tions, INTERNATIONAL ATOMIC ENERGY AGENCY, Vienna, 2017. J. Seco and P. M. Evans, Assessing the effect of electron density in photon dose calcu-lations, Medical Physics , 540–552 (2006). M. Unterrainer et al., Recent advances of PET imaging in clinical Radiat Oncol, RadiatOncol , 1:15 (2020). P. Dirix, K. Haustermans, and V. Vandecaveye, The value of magnetic resonanceimaging for radiotherapy planning, , 151–159 (2014). M. A. Schmidt and G. S. Payne, Radiotherapy planning using MRI, Phys Med Biol , R323 (2015). S. Devic, MRI simulation for radiotherapy treatment planning., Med Phys , 6701(2012). T. Nyholm, M. Nyberg, M. G. Karlsson, and M. Karlsson, Systematisation of spatialuncertainties for comparison between a MR and a CT-based radiotherapy workflow forprostate treatments, Radiat Oncol , 1–9 (2009). K. Ulin, M. M. Urie, and J. M. Cherlow, Results of a multi-institutional benchmarktest for cranial CT/MR image registration, Int J Radiat Oncol Biol Phys , 1584–1589(2010). B. A. Fraass, D. L. McShan, R. F. Diaz, R. K. Ten Haken, A. Aisen, S. Gebarski,G. Glazer, and A. S. Lichter, Integration of magnetic resonance imaging into radiationtherapy treatment planning: i. technical considerations, Int J Radiat Oncol Biol Phys , 1897–908 (1987). Y. K. Lee, M. Bollet, G. Charles-Edwards, M. A. Flower, M. O. Leach, H. McNair,E. Moore, C. Rowbottom, and S. Webb, Radiotherapy treatment planning of prostatecancer using magnetic resonance imaging alone, Radiother Oncol , 203–216 (2003). T. Nyholm and J. Jonsson, Counterpoint: Opportunities and Challenges of a MagneticResonance Imaging-Only Radiotherapy Work Flow, Semin Radiat Oncol , 175–80(2014). eep learning-based sCT generation in RT and PET February 5, 2021 page 33 M. Kapanen, J. Collan, A. Beule, T. Sepp¨al¨a, K. Saarilahti, and M. Tenhunen, Commis-sioning of MRI-only based treatment planning procedure for external beam radiotherapyof prostate, Magn Reson Med , 127–35 (2013). A. M. Owrangi, P. B. Greer, and C. K. Glide-Hurst, MRI-only treatment planning:benefits and challenges, Phys Med Biol , 05TR01 (2018). M. Karlsson, M. G. Karlsson, T. Nyholm, C. Amies, and B. Zackrisson, DedicatedMagnetic Resonance Imaging in the Radiotherapy Clinic, Int. J. Radiat. Oncol. Biol.Phys. , 644–51 (2009). J. J. Lagendijk, B. W. Raaymakers, C. A. Van den Berg, M. A. Moerland, M. E.Philippens, and M. Van Vulpen, MR guidance in radiotherapy, Phys Med Biol ,R349 (2014). J. H. Jonsson, M. G. Karlsson, M. Karlsson, and T. Nyholm, Treatment planning usingMRI data: an analysis of the dose calculation accuracy for different treatment regions,Radiat Oncol , 62 (2010). J. M. Edmund and T. Nyholm, A review of substitute CT generation for MRI-onlyradiation therapy, Radiat Oncol (2017). E. Johnstone, J. J. Wyatt, A. M. Henry, S. C. Short, D. Sebag-Montefiore, L. Murray,C. G. Kelly, H. M. McCallum, and R. Speight, Systematic Review of Synthetic Com-puted Tomography Generation Methodologies for Use in Magnetic Resonance Imaging-Only Radiation Therapy, Int J Radiat Oncol Biol Phys , 199–217 (2018). B. Wafa and A. Moussaoui, A review on methods to estimate a CT from MRI data inthe context of MRI-alone RT, Med Tech J , 150–178 (2018). L. Kerkmeijer, M. Maspero, G. Meijer, J. van der Voort van Zyp, H. de Boer, andC. van den Berg, Magnetic Resonance Imaging only Workflow for Radiotherapy Simu-lation and Planning in Prostate Cancer, Clinic Oncol , 692–701 (2018). D. Bird, A. M. Henry, D. Sebag-Montefiore, D. L. 
Buckley, B. Al-Qaisieh, andR. Speight, A Systematic Review of the Clinical Implementation of Pelvic Magnetic
Resonance Imaging-Only Planning for External Beam Radiation Therapy, Int J RadiatOncol Biol Phys , 479–492 (2019). A. Hoffmann, B. Oborn, M. Moteabbed, S. Yan, T. Bortfeld, A. Knopf, H. Fuchs,D. Georg, J. Seco, M. F. Spadea, O. J¨akel, C. Kurz, and K. Parodi, MR-guided protontherapy: a review and a preview, Radiat Oncol (2020). V. T. Taasti, P. Klages, K. Parodi, and L. P. Muren, Developments in deep learningbased corrections of cone beam computed tomography to enable dose calculations foradaptive radiotherapy, Physics and Imaging in Radiat Oncol , 77–79 (2020). L. Zhu, J. Wang, and L. Xing, Noise suppression in scatter correction for cone-beamCT, Med Phys , 741–752 (2009b). L. Zhu, Y. Xie, J. Wang, and L. Xing, Scatter correction for cone-beam CT in radiationtherapy, Med Phys , 2258–2268 (2009c). A. Mehranian, H. Arabi, and H. Zaidi, Vision 20/20: magnetic resonance imaging-guided attenuation correction in PET/MRI: challenges, solutions, and opportunities,Med Phys , 1130–1155 (2016). I. Mecheter, L. Alic, M. Abbod, A. Amira, and J. Ji, MR Image-Based Attenuation Cor-rection of Brain PET Imaging: Review of Literature on Machine Learning Approachesfor Segmentation, Journal of Digital Imaging , 1–18 (2020). C. Catana, Attenuation correction for human PET/MRI studies, Phys Med Biol ,TR02 (2020). Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature , 436–444 (2015). I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio,
Deep learning , Number 2 inAdaptive Computation and Machine Learning, MIT press Cambridge, 2016. P. Meyer, V. Noblet, C. Mazzara, and A. Lallement, Survey on deep learning forradiotherapy, Comp Biol Med , 126–146 (2018). B. Sahiner, A. Pezeshk, L. M. Hadjiiski, X. Wang, K. Drukker, K. H. Cha, R. M.Summers, and M. L. Giger, Deep learning in medical imaging and radiation therapy,Med Phys , e1–e36 (2018). eep learning-based sCT generation in RT and PET February 5, 2021 page 35 I. Boon, T. A. Yong, and C. Boon, Assessing the Role of Artificial Intelligence (AI)in Clinical Oncology: Utility of Machine Learning in Radiotherapy Target VolumeDelineation, Medicines , 131 (2018). C. Wang, X. Zhu, J. C. Hong, and D. Zheng, Artificial Intelligence in RadiotherapyTreatment Planning: Present and Future, Tech Canc Res Treat , 153303381987392(2019). L. Boldrini, J.-E. Bibault, C. Masciocchi, Y. Shen, and M.-I. Bittner, Deep Learning:A Review for the Radiation Oncologist, Front Oncol (2019). D. Jarrett, E. Stride, K. Vallis, and M. J. Gooding, Applications and limitations ofmachine learning in Radiat Oncol, Brit J Radiol , 20190001 (2019). K. J. Kiser, C. D. Fuller, and V. K. Reed, Artificial intelligence in Radiat Oncoltreatment planning: a brief overview, J Med Art Intel , 9–9 (2019). A. Krizhevsky, I. Sutskever, and G. E. Hinton, Imagenet classification with deep con-volutional neural networks, Adv Neur Inf Proc Syst , 1097–1105 (2012). G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A.Van Der Laak, B. Van Ginneken, and C. I. S´anchez, A survey on deep learning inmedical image analysis, Med Image Anal , 60–88 (2017). D. Nie, X. Cao, Y. Gao, L. Wang, and D. Shen, Estimating CT image from MRI datausing 3D fully convolutional networks, pages 170–178 (2016). J. S. Lee, A review of deep learning-based approaches for attenuation correction inpositron emission tomography, IEEE Transactions on Radiation and Plasma MedicalSciences (2020). B. Yu, Y. Wang, L. Wang, D. Shen, and L. Zhou,
Medical Image Synthesis via DeepLearning , pages 23–44, Springer International Publishing, Cham, 2020. T. Wang, Y. Lei, Y. Fu, J. F. Wynne, W. J. Curran, T. Liu, and X. Yang, A reviewon medical imaging synthesis using deep learning and its clinical applications, J AppClin Med Phys (2020).
Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature , 436–444 (2015). O. Ronneberger, P. Fischer, and T. Brox, U-net: Convolutional networks for biomedical image segmentation, in
International Conference on Medical image computing andcomputer-assisted intervention , pages 234–241, Springer, 2015. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,A. Courville, and Y. Bengio, Generative adversarial nets, Advances in neural in-formation processing systems , 2672–2680 (2014). P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, Image-to-image translation with condi-tional adversarial networks, in
Proc IEEE CVPR , pages 1125–1134, 2017. X. Wu, K. Xu, and P. Hall, A survey of image synthesis and editing with generativeadversarial networks, Tsinghua Science and Technology , 660–674 (2017). A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath,Generative adversarial networks: An overview, IEEE Signal Processing Magazine ,53–65 (2018). X. Yi, E. Walia, and P. Babyn, Generative adversarial network in medical imaging: Areview, Medical image analysis , 101552 (2019). J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, Unpaired image-to-image translationusing cycle-consistent adversarial networks, in
Proceedings of the IEEE internationalconference on computer vision , pages 2223–2232, 2017. D. A. Low, Gamma dose distribution evaluation tool, , 012071 (2010). H. Paganetti, Range uncertainties in proton therapy and the role of Monte Carlosimulations, Phys Med Biol , R99 (2012). E. A. Andres et al., Dosimetry-driven quality measure of brain pseudo Computed To-mography generated from deep learning for MRI-only radiotherapy treatment planning,Int J Radiat Oncol Biol Phys , 813–823 (2020). M. Eckl, L. Hoppen, G. R. Sarria, J. Boda-Heggemann, A. Simeonova-Chergou, V. Steil,F. A. Giordano, and J. Fleckenstein, Evaluation of a cycle-generative adversarial eep learning-based sCT generation in RT and PET February 5, 2021 page 37 network-based cone-beam CT to synthetic CT conversion algorithm for adaptive ra-diation therapy, Physica Medica , 308–316 (2020). Y. Peng et al., Magnetic resonance-based synthetic computed tomography images gener-ated using generative adversarial networks for nasopharyngeal carcinoma radiotherapytreatment planning, Radiother Oncol , 217–224 (2020). P. Qian, K. Xu, T. Wang, Q. Zheng, H. Yang, A. Baydoun, J. Zhu, B. Traughber,and R. F. Muzic, Estimating CT from MR Abdominal Images Using Novel GenerativeAdversarial Networks, J Grid Comp , 1–16 (2020). K. Xu, J. Cao, K. Xia, H. Yang, J. Zhu, C. Wu, Y. Jiang, and P. Qian, Multichannelresidual conditional GAN-leveraged abdominal pseudo-CT generation via Dixon MRimages, IEEE Access , 163823–163830 (2019). C. N. Ladefoged, L. Marner, A. Hindsholm, I. Law, L. Højgaard, and F. L. Ander-sen, Deep learning based attenuation correction of PET/MRI in pediatric brain tumorpatients: Evaluation in a clinical setting, Frontiers in neuroscience , 1005 (2019). M. Maspero, L. G. Bentvelzen, M. H. Savenije, F. Guerreiro, E. Seravalli, G. O.Janssens, C. A. van den Berg, and M. E. Philippens, Deep learning-based synthetic CTgeneration for paediatric brain MR-only photon and proton radiotherapy, RadiotherOncol , 197–204 (2020). M. C. Florkow et al., Deep learning-enabled MRI-only photon and proton therapytreatment planning for paediatric abdominal tumours, Radiother Oncol , 220–227(2020). W. Jeon, H. J. An, J.-i. Kim, J. M. Park, H. Kim, K. H. Shin, and E. K. Chie,Preliminary Application of Synthetic Computed Tomography Image Generation fromMagnetic Resonance Image Using Deep-Learning in Breast Cancer Patients, J RadiatProt Res , 149–155 (2019). T. J. Bradshaw, G. Zhao, H. Jang, F. Liu, and A. B. McMillan, Feasibility of deeplearning–based PET/MR attenuation correction in the pelvis using only diagnostic MRimages, Tomography , 138 (2018). Last edited Date:February 5, 2021 age 38 Spadea MF & Maspero M et al J. Fu, K. Singhrao, M. Cao, V. Yu, A. P. Santhanam, Y. Yang, M. Guo, A. C. Raldow,D. Ruan, and J. H. Lewis, Generation of abdominal synthetic CTs from 035 T MRimages using generative adversarial networks for MR-only liver radiotherapy, BiomPhys Eng Express , 015033 (2020). Y. Li, W. Li, J. Xiong, J. Xia, and Y. Xie, Comparison of Supervised and UnsupervisedDeep Learning Methods for Medical Image Synthesis between Computed Tomographyand Magnetic Resonance Images, BioMed Research International (2020). L. Xu, X. Zeng, H. Zhang, W. Li, J. Lei, and Z. Huang, BPGAN: Bidirectional CT-to-MRI prediction using multi-generative multi-adversarial nets with spectral normaliza-tion and localization, Neural Networks , 82–98 (2020). J. Fu, Y. Yang, K. Singhrao, D. Ruan, F.-I. Chu, D. A. Low, and J. H. 
Lewis, Deeplearning approaches using 2D and 3D convolutional neural networks for generating malepelvic synthetic computed tomography from magnetic resonance imaging, Med Phys , 3788–3798 (2019). S. Neppl et al., Evaluation of proton and photon dose distributions recalculated on 2Dand 3D Unet-generated pseudoCTs from T1-weighted MR head scans, Acta Oncol ,1429–1434 (2019). L. Fetty, M. Bylund, P. Kuess, G. Heilemann, T. Nyholm, D. Georg, and T. L¨ofstedt,Latent space manipulation for high-resolution medical image synthesis via the Style-GAN, Zeits Med Phy (2020). L. Xiang, Q. Wang, D. Nie, L. Zhang, X. Jin, Y. Qiao, and D. Shen, Deep embeddingconvolutional neural network for synthesizing CT image from T1-Weighted MR image,Med Imag Anal , 31–44 (2018). D. Cusumano et al., A deep learning approach to generate synthetic CT in low fieldMR-guided adaptive radiotherapy for abdominal and pelvic cases, Radiother Oncol , 205–212 (2020). J. Harms, Y. Lei, T. Wang, R. Zhang, J. Zhou, X. Tang, W. J. Curran, T. Liu, andX. Yang, Paired cycle-GAN-based image correction for quantitative cone-beam com-puted tomography, Med Phys , 3998–4009 (2019). eep learning-based sCT generation in RT and PET February 5, 2021 page 39 M. Maspero, A. C. Houweling, M. H. Savenije, T. C. van Heijst, J. J. Verhoeff, A. N.Kotte, and C. A. van den Berg, A single neural network for cone-beam computedtomography-based radiotherapy of head-and-neck, lung and breast cancer, Phys ImagRadiat Oncol , 24–31 (2020). Y. Zhang, N. Yue, M.-Y. Su, B. Liu, Y. Ding, Y. Zhou, H. Wang, Y. Kuang, andK. Nie, Improving CBCT Quality to CT Level using Deep-Learning with GenerativeAdversarial Network, Med Phys (2020). M. Maspero, M. H. Savenije, A. M. Dinkla, P. R. Seevinck, M. P. Intven, I. M.Jurgenliemk-Schulz, L. G. Kerkmeijer, and C. A. van den Berg, Dose evaluation offast synthetic-CT generation using a generative adversarial network for general pelvisMR-only radiotherapy, Phys Med Biol , 185001 (2018). X. Han, MR-based synthetic CT generation using a deep convolutional neural networkmethod, Med Phys , 1408–1419 (2017). H. Emami, M. Dong, S. P. Nejad-Davarani, and C. K. Glide-Hurst, Generating syntheticCTs from magnetic resonance images using generative adversarial networks, Med Phys , 3627–3636 (2018). C.-B. Jin, H. Kim, M. Liu, W. Jung, S. Joo, E. Park, Y. S. Ahn, I. H. Han, J. I. Lee,and X. Cui, Deep CT to MR synthesis using paired and unpaired data, Sensors ,2361 (2019). Y. Lei, J. Harms, T. Wang, Y. Liu, H.-K. Shu, A. B. Jani, W. J. Curran, H. Mao, T. Liu,and X. Yang, MRI-only based synthetic CT generation using dense cycle consistentgenerative adversarial networks, Med Phys , 3565–3581 (2019). H. Yang, J. Sun, A. Carass, C. Zhao, J. Lee, J. L. Prince, and Z. Xu, Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN, IEEE Trans Med Imaging , 4249–4261 (2020). H. Massa, J. Johnson, and A. McMillan, Comparison of deep learning synthesis ofsynthetic CTs using clinical MRI inputs, Phys Med Biol , NT03 (2020). Last edited Date:February 5, 2021 age 40 Spadea MF & Maspero M et al Y. Wang, C. Liu, X. Zhang, and W. Deng, Synthetic CT generation based on T2weighted MRI of nasopharyngeal carcinoma (NPC) using a deep convolutional neuralnetwork (DCNN), Front Oncol (2019). X. Tie, S.-K. Lam, Y. Zhang, K.-H. Lee, K.-H. Au, and J. Cai, Pseudo-CT gener-ation from multi-parametric MRI using a novel multi-channel multi-path conditionalgenerative adversarial network for nasopharyngeal carcinoma patients, Med Phys ,1750–1762 (2020). V. Kearney, B. 
P. Ziemer, A. Perry, T. Wang, J. W. Chan, L. Ma, O. Morin, S. S. Yom,and T. D. Solberg, Attention-Aware Discrimination for MR-to-CT Image TranslationUsing Cycle-Consistent Generative Adversarial Networks, Radiol: Art Intel , e190027(2020). A. Largent et al., Head-and-Neck MRI-only radiotherapy treatment planning: Fromacquisition in treatment position to pseudo-CT generation, Cancer/Radioth´erapie ,288–297 (2020). P. Su, S. Guo, S. Roys, F. Maier, H. Bhat, E. Melhem, D. Gandhi, R. Gullapalli, andJ. Zhuo, Transcranial MR Imaging–Guided Focused Ultrasound Interventions UsingDeep Learning Synthesized CT, Am J Neurorad , 1841–1848 (2020). M. C. Florkow et al., Deep learning–based MR-to-CT synthesis: The influence ofvarying gradient echo–based MR images as input channels. A. Bahrami, A. Karimian, E. Fatemizadeh, H. Arabi, and H. Zaidi, A new deep con-volutional neural network design with efficient learning capability: Application to CTimage synthesis from MRI, Med Phys , 5158–5171 (2020). Y. Liu et al., MRI-based treatment planning for proton radiotherapy: dosimetric vali-dation of a deep learning-based liver synthetic CT generation method, Phys Med Biol , 145015 (2019). L. Liu, A. Johansson, Y. Cao, J. Dow, T. S. Lawrence, and J. M. Balter, Abdominalsynthetic CT generation from MR Dixon images using a U-net trained with ’semi-synthetic’ CT data, Phys Med Biol , 125001 (2020). eep learning-based sCT generation in RT and PET February 5, 2021 page 41 A. M. Dinkla, J. M. Wolterink, M. Maspero, M. H. Savenije, J. J. Verhoeff, E. Seravalli,I. Iˇsgum, P. R. Seevinck, and C. A. van den Berg, MR-only brain radiation therapy:dosimetric evaluation of synthetic CTs generated by a dilated convolutional neuralnetwork, Int J Radiat Oncol Biol Phys , 801–812 (2018). F. Liu, P. Yadav, A. M. Baschnagel, and A. B. McMillan, MR-based treatment planningin radiation therapy using a deep learning approach, J App Clin Med Phys , 105–114(2019). S. Kazemifar, S. McGuire, R. Timmerman, Z. Wardak, D. Nguyen, Y. Park, S. Jiang,and A. Owrangi, MRI-only brain radiotherapy: Assessing the dosimetric accuracy ofsynthetic CT images generated using a deep learning approach, Radiother Oncol ,56–63 (2019). G. Shafai-Erfani et al., MRI-based proton treatment planning for base of skull tumors,Int J Part Ther , 12–25 (2019). D. Gupta, M. Kim, K. A. Vineberg, and J. M. Balter, Generation of synthetic CTimages from MRI for treatment planning and patient positioning using a 3-channelU-Net trained on sagittal images, Front Oncol , 964 (2019). M. F. Spadea, G. Pileggi, P. Zaffino, P. Salome, C. Catana, D. Izquierdo-Garcia, F. Am-ato, and J. Seco, Deep convolution neural network (DCNN) multiplane approach tosynthetic CT generation from MR images—application in brain proton therapy, Int JRadiat Oncol Biol Phys , 495–503 (2019).
Y. Koike, Y. Akino, I. Sumida, H. Shiomi, H. Mizuno, M. Yagi, F. Isohashi, Y. Seo,O. Suzuki, and K. Ogawa, Feasibility of synthetic computed tomography generated withan adversarial network for multi-sequence magnetic resonance-based brain radiotherapy,J Radiat Res , 92–103 (2020). S. Kazemifar, A. M. Barrag´an Montero, K. Souris, S. T. Rivas, R. Timmerman, Y. K.Park, S. Jiang, X. Geets, E. Sterpin, and A. Owrangi, Dosimetric evaluation of syntheticCT generated with GANs for MRI-only proton therapy treatment planning of braintumors, J App Clin Med Phys (2020).
S. Chen, A. Qin, D. Zhou, and D. Yan, U-net-generated synthetic CT images for mag-netic resonance imaging-only prostate intensity-modulated radiation therapy treatmentplanning, Med Phys , 5659–5665 (2018). H. Arabi, J. A. Dowling, N. Burgos, X. Han, P. B. Greer, N. Koutsouvelis, and H. Zaidi,Comparative study of algorithms for synthetic CT generation from MRI: consequencesfor MRI-guided radiation planning in the pelvic region, Med Phys , 5218–5233 (2018). Y. Liu et al., Evaluation of a deep learning-based pelvic synthetic CT generationtechnique for MRI-based prostate proton treatment planning, Phys Med Biol , 205022(2019). A. Largent et al., Comparison of deep learning-based and patch-based methods forpseudo-CT generation in MRI-based prostate dose planning, Int J Radiat Oncol BiolPhys , 1137–1150 (2019).
K. N. B. Boni, J. Klein, L. Vanquin, A. Wagner, T. Lacornerie, D. Pasquier, andN. Reynaert, MR to CT synthesis with multicenter data in the pelvic area using aconditional generative adversarial network, Phys Med Biol , 075002 (2020). L. Fetty, T. L¨ofstedt, G. Heilemann, H. Furtado, N. Nesvacil, T. Nyholm, D. Georg,and P. Kuess, Investigating conditional GAN performance with different generatorarchitectures, an ensemble model, and different MR scanners for MR-sCT conversion,Phys Med Biol , 5004 (2020). D. Bird et al., Multicentre, deep learning, synthetic-CT generation for ano-rectal MR-only radiotherapy treatment planning, Radiother Oncol , 23–28 (2021).
A. M. Dinkla, M. C. Florkow, M. Maspero, M. H. Savenije, F. Zijlstra, P. A. Doornaert,M. van Stralen, M. E. Philippens, C. A. van den Berg, and P. R. Seevinck, Dosimetricevaluation of synthetic CT for head and neck radiotherapy generated by a patch-basedthree-dimensional convolutional neural network, Med Phys , 4095–4104 (2019). P. Klages, I. Benslimane, S. Riyahi, J. Jiang, M. Hunt, J. O. Deasy, H. Veeraraghavan,and N. Tyagi, Patch-based generative adversarial neural network models for head andneck MR-only planning, Med Phys , 626–642 (2020). eep learning-based sCT generation in RT and PET February 5, 2021 page 43 M. Qi et al., Multi-sequence MR image-based synthetic CT generation using a gen-erative adversarial network for head and neck MRI-only radiotherapy, Med Phys ,1880–1894 (2020). A. Thummerer, B. A. de Jong, P. Zaffino, A. Meijers, G. G. Marmitt, J. Seco, R. J.Steenbakkers, J. A. Langendijk, S. Both, and M. F. Spadea, Comparison of the suitabil-ity of CBCT-and MR-based synthetic CTs for daily adaptive proton therapy in headand neck patients, Phys Med Biol , 235036 (2020). S. Olberg et al., Synthetic CT reconstruction using a deep spatial pyramid convolutionalframework for MR-only breast radiotherapy, Med Phys , 4135–4147 (2019). S. Kida, T. Nakamoto, M. Nakano, K. Nawa, A. Haga, J. Kotoku, H. Yamashita, andK. Nakagawa, Cone beam computed tomography image quality improvement using adeep convolutional neural network, Cureus (2018). L. Chen, X. Liang, C. Shen, S. Jiang, and J. Wang, Synthetic CT generation fromCBCT images via deep learning, Med Phys , 1115–1125 (2020). S. Kida, S. Kaji, K. Nawa, T. Imae, T. Nakamoto, S. Ozaki, T. Ohta, Y. Nozawa, andK. Nakagawa, Visual enhancement of Cone-beam CT by use of CycleGAN, Med Phys , 998–1010 (2020). N. Yuan, B. Dyer, S. Rao, Q. Chen, S. Benedict, L. Shang, Y. Kang, J. Qi, and Y. Rong,Convolutional neural network enhancement of fast-scan low-dose cone-beam CT imagesfor head and neck radiotherapy, Phys Med Biol , 035003 (2020). X. Liang, L. Chen, D. Nguyen, Z. Zhou, X. Gu, M. Yang, J. Wang, and S. Jiang, Gener-ating synthesized computed tomography (CT) from cone-beam computed tomography(CBCT) using CycleGAN for adaptive radiation therapy, Phys Med Biol , 125002(2019). Y. Li, J. Zhu, Z. Liu, J. Teng, Q. Xie, L. Zhang, X. Liu, J. Shi, and L. Chen, Apreliminary study of using a deep convolution neural network to generate synthesizedCT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma,Phys Med Biol , 145010 (2019). Last edited Date:February 5, 2021 age 44 Spadea MF & Maspero M et al
A. Barateau et al., Comparison of CBCT-based dose calculation methods in head andneck cancer radiotherapy: from Hounsfield unit to density calibration curve to deeplearning, Med Phys , 4683–4693 (2020). G. Landry, D. Hansen, F. Kamp, M. Li, B. Hoyle, J. Weller, K. Parodi, C. Belka,and C. Kurz, Comparing Unet training with three different datasets to correct CBCTimages for prostate radiotherapy dose calculations [J], Phys Med Biol (2019). C. Kurz, M. Maspero, M. H. Savenije, G. Landry, F. Kamp, M. Pinto, M. Li, K. Parodi,C. Belka, and C. A. Van den Berg, CBCT correction using a cycle-consistent generativeadversarial network and unpaired training to enable photon and proton dose calculation,Phys Med Biol , 225004 (2019). Y. Liu, Y. Lei, T. Wang, Y. Fu, X. Tang, W. J. Curran, T. Liu, P. Patel, and X. Yang,CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreaticadaptive radiotherapy, Med Phys (2020).
A. Thummerer, P. Zaffino, A. Meijers, G. G. Marmitt, J. Seco, R. J. Steenbakkers,J. A. Langendijk, S. Both, M. F. Spadea, and A. C. Knopf, Comparison of CBCTbased synthetic CT methods suitable for proton dose calculations in adaptive protontherapy, Phys Med Biol , 095002 (2020). A. Radford, L. Metz, and S. Chintala, Unsupervised representation learning with deepconvolutional generative adversarial networks, arXiv preprint arXiv:1511.06434 (2015).
T. Karras, T. Aila, S. Laine, and J. Lehtinen, Progressive growing of gans for improvedquality, stability, and variation, arXiv preprint arXiv:1710.10196 (2017).
O. Oktay et al., Attention u-net: Learning where to look for the pancreas, arXivpreprint arXiv:1804.03999 (2018).
A. P. Leynes, J. Yang, F. Wiesinger, S. S. Kaushik, D. D. Shanbhag, Y. Seo, T. A.Hope, and P. E. Larson, Direct pseudoCT generation for pelvis PET/MRI attenuationcorrection using deep convolutional neural networks with multi-parametric MRI: zeroecho-time and dixon deep pseudoCT (ZeDD-CT), J Nuc Med , jnumed–117 (2017). eep learning-based sCT generation in RT and PET February 5, 2021 page 45
A. Baydoun et al., Dixon-based thorax synthetic CT generation using Generative Ad-versarial Network, Intelligence-Based Medicine , 100010 (2020). K. Gong, J. Yang, K. Kim, G. El Fakhri, Y. Seo, and Q. Li, Attenuation correction forbrain PET imaging using deep neural network based on Dixon and ZTE MR images,Phys Med Biol , 125011 (2018). H. Jang, F. Liu, G. Zhao, T. Bradshaw, and A. B. McMillan, Deep learning basedMRAC using rapid ultrashort echo time imaging, Med Phys , 3697–3704 (2018). A. Torrado-Carvajal, J. Vera-Olmos, D. Izquierdo-Garcia, O. A. Catalano, M. A.Morales, J. Margolin, A. Soricelli, M. Salvatore, N. Malpica, and C. Catana, Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuationcorrection, Journal of nuclear medicine , 429–435 (2019). P. Blanc-Durand, M. Khalife, B. Sgard, S. Kaushik, M. Soret, A. Tiss, G. El Fakhri,M.-O. Habert, F. Wiesinger, and A. Kas, Attenuation correction using 3D deep con-volutional neural network for brain 18F-FDG PET/MR: Comparison with Atlas, ZTEand CT based attenuation correction, PloS one , e0223141 (2019). K. Gong, P. K. Han, K. A. Johnson, G. El Fakhri, C. Ma, and Q. Li, Attenuation correc-tion using deep Learning and integrated UTE/multi-echo Dixon sequence: evaluationin amyloid and tau PET imaging, Eur J Nucl Med Mol Imaging , 1–11 (2020).
A. Pozaruk, K. Pawar, S. Li, A. Carey, J. Cheng, V. P. Sudarshan, M. Cholewa,J. Grummet, Z. Chen, and G. Egan, Augmented deep learning model for improvedquantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRIprostate imaging, Eur J Nucl Med Mol Imaging (2020).
K. Gong, J. Yang, P. E. Larson, S. C. Behr, T. A. Hope, Y. Seo, and Q. Li, MR-basedattenuation correction for brain PET using 3D cycle-consistent adversarial network,IEEE Transactions on Radiation and Plasma Medical Sciences (2020).
F. Liu, H. Jang, R. Kijowski, T. Bradshaw, and A. B. McMillan, Deep learning MRimaging–based attenuation correction for PET/MR imaging, Radiology , 676–684(2018).
H. Arabi, G. Zeng, G. Zheng, and H. Zaidi, Novel adversarial semantic structure deeplearning for MRI-guided attenuation correction in brain PET/MRI, Eur J Nucl MedMol Imaging , 2746–2759 (2019). K. D. Spuhler, J. Gardus, Y. Gao, C. DeLorenzo, R. Parsey, and C. Huang, Synthe-sis of patient-specific transmission data for PET attenuation correction for PET/MRIneuroimaging using a convolutional neural network, J Nucl Med , 555–560 (2019). F. Liu, H. Jang, R. Kijowski, G. Zhao, T. Bradshaw, and A. B. McMillan, A deeplearning approach for 18 F-FDG PET attenuation correction, EJNMMI physics ,1–15 (2018). X. Dong, T. Wang, Y. Lei, K. Higgins, T. Liu, W. J. Curran, H. Mao, J. A. Nye,and X. Yang, Synthetic CT generation from non-attenuation corrected PET images forwhole-body PET imaging, Phys Med Biol , 215016 (2019). K. Armanious, T. Hepp, T. K¨ustner, H. Dittmann, K. Nikolaou, C. La Foug`ere, B. Yang,and S. Gatidis, Independent attenuation correction of whole body [18 F] FDG-PET us-ing a deep learning approach with Generative Adversarial Networks, EJNMMI research , 1–9 (2020). K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale imagerecognition, arXiv preprint arXiv:1409.1556 (2014).
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in
Proceedings of the IEEE conference on computer vision and pattern recognition , pages770–778, 2016.
J. J. Van Dyk, editor,
The Modern Technology of Radiation Oncology , volume 4, MedicalPhysics Publisher, 2020.
B. Stemkens, E. S. Paulson, and R. H. Tijssen, Nuts and bolts of 4D-MRI for radio-therapy, Phys Med Biol , 21TR01 (2018). C. Paganelli et al., MRI-guidance for motion management in external beam radiother-apy: current status and future challenges, Phys Med Biol , 22TR03 (2018). eep learning-based sCT generation in RT and PET February 5, 2021 page 47 J. N. Freedman, H. E. Bainbridge, S. Nill, D. J. Collins, M. Kachelrieß, M. O. Leach,F. McDonald, U. Oelfke, and A. Wetscherek, Synthetic 4D-CT of the thorax for treat-ment plan adaptation on MR-guided radiotherapy systems, Phys Med Biol , 115005(2019). T. R. Goodman, A. Mustafa, and E. Rowe, Pediatric CT radiation exposure: where wewere, and where we are now, Pediatric Radiol , 469–478 (2019). J. Boda-Heggemann, F. Lohr, F. Wenz, M. Flentje, and M. Guckenberger, kV cone-beam CT-based IGRT, Strahlen Onkol , 284–291 (2011).
U. V. Elstrøm, L. P. Muren, J. B. Petersen, and C. Grau, Evaluation of image qualityfor different kV cone-beam CT acquisition and reconstruction methods in the head andneck region, Acta Oncologica , 908–917 (2011). M. Peroni, D. Ciardo, M. F. Spadea, M. Riboldi, S. Comi, D. Alterio, G. Baroni, andR. Orecchia, Automatic segmentation and online virtualCT in head-and-neck adaptiveradiation therapy, Int J Radiat Oncol Biol Phys , e427–e433 (2012). C. Veiga, J. Alshaikhi, R. Amos, A. M. Louren¸co, M. Modat, S. Ourselin, G. Royle, andJ. R. McClelland, Cone-beam computed tomography and deformable registration-based“dose of the day” calculations for adaptive proton therapy, Int J Part Ther , 404–414(2015). Y.-K. Park, G. C. Sharp, J. Phillips, and B. A. Winey, Proton dose calculation onscatter-corrected CBCT image: Feasibility study for adaptive proton therapy, MedPhys , 4449–4459 (2015). C. Kurz, R. Nijhuis, M. Reiner, U. Ganswindt, C. Thieke, C. Belka, K. Parodi, andG. Landry, Feasibility of automated proton therapy plan adaptation for head and necktumors using cone beam CT images, Radiat Oncol , 1–9 (2016). K. Arai et al., Feasibility of CBCT-based proton dose calculation using a histogram-matching algorithm in proton beam therapy, Physica Medica , 68–76 (2017). C. Gom`a, I. P. Almeida, and F. Verhaegen, Revisiting the single-energy CT calibrationfor proton therapy treatment planning: a critical look at the stoichiometric method,Phys Med Biol , 235011 (2018). Last edited Date:February 5, 2021 age 48 Spadea MF & Maspero M et al
J. Harms, Y. Lei, T. Wang, M. McDonald, B. Ghavidel, W. Stokes, W. J. Curran,J. Zhou, T. Liu, and X. Yang, Cone-beam CT-derived relative stopping power mapgeneration via deep learning for proton radiotherapy, Med Phys , 4416–4427 (2020). D. Izquierdo-Garcia, S. J. Sawiak, K. Knesaurek, J. Narula, V. Fuster, J. Machac, andZ. A. Fayad, Comparison of MR-based attenuation correction and CT-based attenuationcorrection of whole-body PET/MR imaging, European journal of nuclear medicine andmolecular imaging , 1574–1584 (2014). I. Shiri, H. Arabi, P. Geramifar, G. Hajianfar, P. Ghafarian, A. Rahmim, M. R. Ay,and H. Zaidi, Deep-JASC: joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network, European Journal of Nuclear Medicine andMolecular Imaging (2020).
C. Shorten and T. M. Khoshgoftaar, A survey on image data augmentation for deeplearning, Journal of Big Data , 1–48 (2019). Z. Li, K. Kamnitsas, and B. Glocker, Overfitting of neural nets under class imbalance:Analysis and improvements for segmentation, in
International Conference on MedicalImage Computing and Computer-Assisted Intervention , pages 402–410, Springer, 2019.
Z. Hang, G. Orazio, F. Iuri, and K. Jan, Loss Functions for Neural Networks for ImageProcessing, CoRR abs/1511.08861 (2015).
M. C. Florkow, F. Zijlstra, L. G. Kerkmeijer, M. Maspero, C. A. van den Berg, M. vanStralen, and P. R. Seevinck, The impact of MRI-CT registration errors on deep learning-based synthetic CT generation, in
Medical Imaging 2019: Image Processing , volume10949, page 1094938, International Society for Optics and Photonics, 2019.
J. M. Wolterink, A. M. Dinkla, M. H. Savenije, P. R. Seevinck, C. A. van den Berg,and I. Iˇsgum, Deep MR to CT synthesis using unpaired data, in
Int Work SASHIMI ,pages 14–23, Springer, 2017.
A. Rehman and F. G. Khan, A deep learning based review on abdominal images,Multimedia Tools and Applications , 1–32 (2020). eep learning-based sCT generation in RT and PET February 5, 2021 page 49
S. P. Singh, L. Wang, S. Gupta, H. Goli, P. Padmanabhan, and B. Guly´as, 3D deeplearning on medical images: a review, Sensors , 5097 (2020). K. Kamnitsas, E. Ferrante, S. Parisot, C. Ledig, A. V. Nori, A. Criminisi, D. Rueckert,and B. Glocker, DeepMedic for brain tumor segmentation, in
International workshopon Brainlesion: Glioma, multiple sclerosis, stroke and traumatic brain injuries , pages138–149, Springer, 2016.
J. Schlemper, O. Oktay, M. Schaap, M. Heinrich, B. Kainz, B. Glocker, and D. Rueckert,Attention gated networks: Learning to leverage salient regions in medical images, MedImage Anal , 197–207 (2019). P. Keeling, J. Clark, and S. Finucane, Challenges in the clinical implementation ofprecision medicine companion diagnostics, Expert review of molecular diagnostics ,593–599 (2020). J. Bertholet et al., Patterns of practice for adaptive and real-time radiation therapy(POP-ART RT) part II: Offline and online plan adaption for interfractional changes,Radiother Oncol , 88–96 (2020).
E. Palm´er, A. Karlsson, F. Nordstr¨om, K. Petruson, C. Siversson, M. Ljungberg, andM. Sohlin, Synthetic computed tomography data allows for accurate absorbed dosecalculations in a magnetic resonance imaging only workflow for head and neck radio-therapy, Phys Imag Radiat Oncol , 36–42. Council of European Union, Regulation (EU) 2017/745 of the European Parliamentand of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC,Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing CouncilDirectives 90/385/EEC and 93/42/EEC, 2017, http://data.europa.eu/eli/reg/2017/745/oj . C. Fiorino, M. Guckenberger, M. Schwarz, U. A. van der Heide, and B. Heijmen,Technology-driven research for radiotherapy innovation, Mol Oncol , 1500–1513(2020). V. Liesbeth et al., Overview of artificial intelligence-based applications in radiotherapy:recommendations for implementation and quality assurance, Radiother Oncol (2020).
X. Liu, S. C. Rivera, D. Moher, M. J. Calvert, and A. K. Denniston, Reportingguidelines for clinical trial reports for interventions involving artificial intelligence: theCONSORT-AI extension, Brit Med J (2020).
S. Mutic, J. R. Palta, E. K. Butker, I. J. Das, M. S. Huq, L.-N. D. Loo, B. J. Salter,C. H. McCollough, and J. Van Dyk, Quality assurance for computed-tomography sim-ulators and the computed-tomography-simulation process: report of the AAPM Radi-ation Therapy Committee Task Group No. 66, Med Phys , 2762–2792 (2003). R. R. Gallas, N. H¨unemohr, A. Runz, N. I. Niebuhr, O. J¨akel, and S. Greilich, Ananthropomorphic multimodality (CT/MRI) head phantom prototype for end-to-endtests in ion radiotherapy, Zeitsch Mediz Phys , 391–399 (2015). N. Niebuhr, W. Johnen, G. Echner, A. Runz, M. Bach, M. Stoll, K. Giske, S. Greilich,and A. Pfaffenberger, The ADAM-pelvis phantom—an anthropomorphic, deformableand multimodal phantom for MRgRT, Phys Med Biol , 04NT05 (2019). K. Singhrao, J. Fu, H. H. Wu, P. Hu, A. U. Kishan, R. K. Chin, and J. H. Lewis,A novel anthropomorphic multimodality phantom for MRI-based radiotherapy qualityassurance testing, Med physics , 1443–1451 (2020). E. Colvill et al., Anthropomorphic phantom for deformable lung and liver CT and MRimaging for radiotherapy, Phys Med Biol , 07NT02 (2020). X. Chen, K. Men, B. Chen, Y. Tang, T. Zhang, S. Wang, Y. Li, and J. Dai, CNN-basedquality assurance for automatic segmentation of breast cancer in radiotherapy, FrontOncol (2020). F. J. Bragman, R. Tanno, Z. Eaton-Rosen, W. Li, D. J. Hawkes, S. Ourselin, D. C.Alexander, J. R. McClelland, and M. J. Cardoso, Uncertainty in multitask learning:joint representations for probabilistic MR-only radiotherapy planning, in
InternationalConference on Medical Image Computing and Computer-Assisted Intervention , pages3–11, Springer, 2018.
M. Hemsley, B. Chugh, M. Ruschin, Y. Lee, C.-L. Tseng, G. Stanisz, and A. Lau, DeepGenerative Model for Synthetic-CT Generation with Uncertainty Predictions, in
Inter- eep learning-based sCT generation in RT and PET February 5, 2021 page 51 national Conference on Medical Image Computing and Computer-Assisted Intervention ,pages 834–844, Springer, 2020.
M. Abdar et al., A review of uncertainty quantification in deep learning: Techniques,applications and challenges, arXiv preprint arXiv:2011.06225 (2020).
D. Kawahara, A. Saito, S. Ozawa, and Y. Nagata, Image synthesis with deep convo-lutional generative adversarial networks for material decomposition in dual-energy CTfrom a kilovoltage CT, Comp Biol Med , 104111 (2020).
L. B. Jans, M. Chen, D. Elewaut, F. Van den Bosch, P. Carron, P. Jacques, R. Wittoek,J. L. Jaremko, and N. Herregods, MRI-based synthetic CT in the detection of structurallesions in patients with suspected sacroiliitis: comparison with MRI, Radiol , 201537(2020).