Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI
Sahin Olut, Yusuf H. Sahin, Ugur Demir, Gozde Unal
ITU Vision Lab, Computer Engineering Department, Istanbul Technical University
{oluts, sahinyu, ugurdemir, gozde.unal}@itu.edu.tr
Abstract
Magnetic Resonance Angiography (MRA) has become an essential MR contrast for imaging and evaluation of vascular anatomy and related diseases. MRA acquisitions are typically ordered for vascular interventions, whereas in typical scenarios, MRA sequences can be absent in the patient scans. This motivates the need for a technique that generates inexistent MRA from existing MR multi-contrast, which could be a valuable tool in retrospective subject evaluations and imaging studies. In this paper, we present a generative adversarial network (GAN) based technique to generate MRA from T1-weighted and T2-weighted MRI images, for the first time to our knowledge. To better model the representation of vessels, which the MRA inherently highlights, we design a loss term dedicated to a faithful reproduction of vascularities. To that end, we incorporate steerable filter responses of the generated and reference images inside a Huber function loss term. Extending the well-established generator-discriminator architecture based on the recent PatchGAN model with the addition of the steerable filter loss, the proposed steerable GAN (sGAN) method is evaluated on the large public database IXI. Experimental results show that the sGAN outperforms the baseline GAN method in terms of an overlap score with similar PSNR values, while it leads to improved visual perceptual quality.
Due to recent improvements in hardware and software technologies of Magnetic Resonance Imaging (MRI), as well as its non-invasive nature, the use of MRI has become ubiquitous in the examination and evaluation of patients in hospitals. Whereas the most common MRI sequences are T1-weighted and T2-weighted MRI, which are acquired routinely in imaging protocols for studying in vivo tissue contrast and anatomical structures of interest, Non-Contrast Enhanced (NCE) time-of-flight (TOF) MR Angiography (MRA) has become established as a non-invasive modality for evaluating vascular diseases throughout intracranial, peripheral, abdominal, renal and thoracic imaging procedures [8, 17, 26]. Early detection of vessel abnormalities has vast importance in the treatment of aneurysms and stenosis, and in evaluating the risk of rupture and hemorrhage, which can be life-threatening or fatal. The MRA technique has shown a high sensitivity in the detection of hemorrhage, as reported in [28]. High accuracy values in the detection of vessel abnormalities such as aneurysms using MRA are reported in [32]. In addition to NCE-MRA, Contrast-Enhanced MRA is found to improve assessment of morphological differences, while no differences were noted between NCE-MRA and CE-MRA in detection and localization of aneurysms [3]. CE-MRA is less preferred in practice due to concerns over the safety of contrast agents and risks for patients, as well as increased acquisition time and costs. Furthermore, when compared to the invasive modality Digital Subtraction Angiography (DSA), which is considered to be the gold standard in detection and planning of endovascular treatments, MRA is reported to present statistically similar accuracy and specificity, with a slightly reduced sensitivity, in the detection of intracranial aneurysms [34].
As MRA is free of ionization exposure effects, it is a desired imaging modality for the study of vasculature and its related pathologies.

In a majority of MRI examinations, T1-weighted and T2-weighted contrast sequences are the main structural imaging sequences. Unless specifically required by endovascular concerns, MRA images are often absent due to lower cost and shorter scan time considerations. When a need for a retrospective inspection of vascular structures arises, generation of the missing MRA contrast based on the available contrast could be a valuable tool in clinical examinations.

Recent advances in machine learning, particularly the emergence of convolutional neural networks (CNNs), have led to an increased interest in their application to medical image computing problems. CNNs have shown great potential in medical image analysis tasks such as brain tumor segmentation and lesion detection [7, 14]. In addition to classification and segmentation related tasks, deep unsupervised methods in machine learning have recently been successfully applied to reconstruction [35], image generation and synthesis problems [2]. In the training stages of such techniques, the network learns to represent the probability distributions of the available data in order to generate new or missing samples from the learned model. The main purpose of this work is to employ those image generative networks to synthesize a new MRI contrast from the other existing multi-modal MRI contrast. Our method relies on the well-established idea of generative adversarial networks (GANs) [6]. The two main contributions of this paper can be summarized as follows:

• We provide a GAN framework for generation of MRA images from T1 and T2 images, for the first time to our knowledge.
• We present a dedicated new loss term, which measures the fidelity of directional features of vascular structures, for increased performance in MRA generation.
Various methods have been proposed to generate images and/or their associated image maps. Some examples in medical image synthesis are given by Zaidi et al. [36], who proposed a segmentation-based technique for reconstruction and refinement of MR images. Catana et al. suggested an atlas-based method to estimate CT using MRI attenuation maps [1]. However, as the complexity of the proposed models was not capable of learning an end-to-end mapping, the performance of those models was limited [25]. Here we refer to relatively recent techniques based on convolutional deep neural networks.
Image synthesis, which is also termed image-to-image translation, has relied on auto-encoders or their variations, such as denoising auto-encoders [30] and variational auto-encoders [16]. Those techniques often lead to blurry or not adequately sharpened outputs because of their classical loss measure, which is based on the standard Euclidean (L2) distance or the L1 distance between the target and produced output images [23]. Generative adversarial networks (GANs) [6] address this issue by adding a discriminator to the network in order to perform adversarial training. The goal is to improve the performance of the generator in learning a realistic data distribution while trying to counterfeit the discriminator. GANs learn a function which maps a noise vector z to a target image y. On the other hand, to produce a mapping from an image to another image, GANs can be conditioned [24]. Conditional GANs (cGANs) learn a mapping G : x, z → y by adding the input image vector x to the same framework. cGANs are more suitable for image translation tasks since the conditioning vector can provide a vast amount of information to the networks.

Numerous works have been published on GANs. DCGAN [27] used convolutional and fractionally strided convolutional layers (a.k.a. transposed convolutions) [37] and batch normalization [10] in order to improve the performance of adversarial training. Recently, inspired by Markov random fields [21], the PatchGAN technique, which slides a window over the input image and evaluates and aggregates the realness of patches, was proposed [11]. Isola et al.'s PatchGAN method, also known as pix2pix, has been applied to various problems in image-to-image translation such as sketches to photos, photos to maps, various maps (e.g. edges) to photos, day to night pictures, and so on.

Medical image synthesis is currently an emerging area of interest for application of the latest image generation techniques mentioned above. Wolterink et al.
[33] synthesized Computed Tomography (CT) images from T1-weighted MR images using the cyclic loss proposed in the CycleGAN technique [38]. Using 3D fully convolutional networks and available contrasts such as T1, T2, PD, and T2SE images,
Figure 1: The sGAN architecture. The ResNet generator takes the concatenation of T1- and T2-weighted MR images and transforms them into an MRA image slice. The quality of the generated image is measured with three loss functions: (i) an adversarial loss generated by the PatchGAN discriminator; (ii) a reconstruction loss which evaluates pixel-wise similarity between the original and generated MRA; (iii) steerable filter responses through a Huber loss function.

Wei et al. [31] synthesized the associated FLAIR image contrast. Nie and Trullo et al. [25] proposed a context-aware technique for medical image synthesis, where they added a gradient difference as a loss term to the generator to emphasize edges. Similarly, [4] utilized the CycleGAN and pix2pix techniques in generating T1-weighted MR contrast from T2-weighted MR contrast, or vice versa.

In this paper, we create a pipeline for generating the MR Angiography (MRA) contrast based on multiple MRI contrasts, particularly the joint T1-weighted and T2-weighted MRI, using the cGAN framework. As MRA imaging mainly targets visualization of vasculature, we modify the cGAN in order to adapt it to MRA generation by elucidating vessel structures through a new loss term in its objective function. We are inspired by steerable filters, which arose from the idea of orientation selective excitations in the human visual cortex. Steerable filters involve a set of detectors at different orientations [18]. Orientation selective convolution kernels are utilized in various image processing tasks such as enhancement, feature extraction, and vesselness filtering [5, 19]. In our work, the addition of the steerable filter responses to the cGAN objective tailors the generator features to both reveal and stay faithful to vessel-like structures, as will be demonstrated.
The proposed method for generating a mapping from T1- and T2-weighted MRI to MRA images, which is named steerable filter GAN (sGAN), is illustrated in Figure 1. The generator and the discriminator networks are conditioned on T1- and T2-weighted MRI, which are fed to the network as two channels of the input. The details of the proposed architecture are described next.

3.1 Generator network
An encoder-decoder type generator network with residual blocks [9], similar to the architecture introduced in [13], is adopted in sGAN. Our network consists of 3 down-sampling layers with strided convolutions of stride 2, followed by 9 residual blocks. In the residual blocks, the channel sizes of the input and output are the same. In the up-sampling part, 3 convolutions with fractional strides are utilized. All convolutional layers except the last up-sampling layer are followed by batch normalization [10] and ReLU activation. In the last layer, a tanh activation without a normalization layer is used.
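A minimal PyTorch sketch of this generator is given below. The paper specifies only the layer counts, strides, and activations; the initial 7×7 convolution, the base channel width, and the 3×3 kernel sizes are assumptions in the style of [13]:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Residual block: input and output channel sizes are the same
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)

class Generator(nn.Module):
    # 3 stride-2 down-sampling convs, 9 residual blocks, 3 fractionally
    # strided (transposed) convs; tanh output without normalization
    def __init__(self, in_ch=2, out_ch=1, base=32):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3),
                  nn.BatchNorm2d(base), nn.ReLU(True)]
        ch = base
        for _ in range(3):  # down-sampling with strided convolutions
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                       nn.BatchNorm2d(ch * 2), nn.ReLU(True)]
            ch *= 2
        layers += [ResidualBlock(ch) for _ in range(9)]
        for _ in range(3):  # up-sampling with fractional strides
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2,
                                          padding=1, output_padding=1),
                       nn.BatchNorm2d(ch // 2), nn.ReLU(True)]
            ch //= 2
        layers += [nn.Conv2d(ch, out_ch, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (N, 2, H, W) concatenated T1- and T2-weighted slices
        return self.net(x)
```

With a two-channel 64×64 input, the output is a single-channel 64×64 slice in [-1, 1], matching the tanh range.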
In a GAN setting, the adversarial loss obtained from the discriminator network D forces the generator network G to produce sharper images, while D updates itself to distinguish real images from synthetic images. As shown in [11], the Markovian discriminator (PatchGAN) architecture leads to more refined outputs with detailed texture, as the input is divided into patches and the network evaluates patches instead of the whole image at once. Our discriminator architecture consists of 3 down-sampling layers with strides of 2, followed by 2 convolutional layers. In the discriminator network, convolutional layers are followed by batch normalization and LReLU [22] activation.
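The discriminator can be sketched analogously. The 4×4 kernels and channel widths below follow the common pix2pix convention and are assumptions; the paper fixes only the layer counts, strides, batch normalization, and LReLU activation:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # PatchGAN: 3 stride-2 down-sampling convs followed by 2 further convs;
    # outputs a map of per-patch real/fake scores rather than one scalar
    def __init__(self, in_ch=3, base=64):  # 2 condition channels (T1, T2) + 1 MRA
        super().__init__()
        layers = []
        ch_in, ch = in_ch, base
        for _ in range(3):  # down-sampling, stride 2
            layers += [nn.Conv2d(ch_in, ch, 4, stride=2, padding=1),
                       nn.BatchNorm2d(ch), nn.LeakyReLU(0.2, True)]
            ch_in, ch = ch, ch * 2
        layers += [nn.Conv2d(ch_in, ch, 4, padding=1),
                   nn.BatchNorm2d(ch), nn.LeakyReLU(0.2, True)]
        layers += [nn.Conv2d(ch, 1, 4, padding=1)]  # patch score logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: condition (T1, T2) concatenated with a real or generated MRA slice
        return self.net(x)
```

Each score in the output map judges the realness of one receptive-field patch, which is then aggregated over the image.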
In sGAN, we employ three different objective functions to optimize the parameters of our network.
Adversarial loss, which is based on the original cGAN framework, is defined as follows:

$$L_{GAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))] \quad (1)$$

where G is the generator network, D is the discriminator network, x is the two-channel input consisting of the T1-weighted and T2-weighted MR images, G(x) is the generated MRA image, and y is the reference (target) MRA image. We utilize the PatchGAN approach, where, similarly, the adversarial loss evaluates whether its input patch is real or synthetically generated [11]. The generator is trained with $L_{adv}$, which consists of the second term in Equation 1.

Reconstruction loss helps the network to capture global appearance characteristics as well as relatively coarse features of the target image in the reconstructed image. For that purpose, we utilize the L1 distance, which is calculated as the absolute differences between the synthesized output and the target images:

$$L_{rec} = \| y - \hat{y} \|_1 \quad (2)$$

where y is the target and $\hat{y} = G(x)$ is the produced output.

Steerable filter response loss
As MR Angiography specifically targets imaging of the vascular anatomy, faithful reproduction of vessel structures is of utmost importance. Recently, it has been shown that variants of GANs with additional loss terms geared towards the applied problem can achieve improved performance compared to conventional GANs [20]. Hence, we design a loss term that emphasizes vesselness properties in the output images. We resort to steerable filters, which are orientation selective convolutional kernels, to extract directional image features tuned to vasculature. In order to increase the focus of the generator towards vessels, we propose the following dedicated loss term, which further incorporates a Huber function loss involving a combination of an L1 and L2 distance between steerable filter responses of the target image and the synthesized output:

$$L_{steer} = \frac{1}{K} \sum_{k=1}^{K} \rho(f_k * y, f_k * \hat{y}) \quad (3)$$

where $*$ denotes the convolution operator, K is the number of filters, and $f_k$ is the k-th steerable filter kernel. The Huber function, with its parameter set to unity, is defined as:

$$\rho(x, y) = \begin{cases} 0.5\,(x - y)^2 & \text{if } |x - y| \le 1 \\ |x - y| - 0.5 & \text{otherwise.} \end{cases}$$

Figure 2: The steerable filters; bottom row: two examples of steerable filter responses (k = 7, 18) to the input MRA image on the left.

Figure 2 depicts the K = 20 steerable filters. We also show sample filter responses to an MRA image to illustrate different characteristics highlighted by the steerable filters. In the sGAN setting, the overall objective is defined as follows:

$$L = \lambda_1 L_{adv} + \lambda_2 L_{rec} + \lambda_3 L_{steer} \quad (4)$$

where $L_{adv}$, $L_{rec}$, $L_{steer}$ refer to Equations 1, 2, 3, respectively, with corresponding weights $\lambda_1$, $\lambda_2$, $\lambda_3$.

We use the IXI dataset, which includes 578 MR Angiography and T1- and T2-weighted images. As the images are not registered, we used the rigid registration provided by the FSL software [29] to register the T1- and T2-weighted contrasts to MRA.
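As an illustrative sketch, the steerable filter responses and the overall generator objective of Equation 4 can be written in PyTorch roughly as below. The oriented Gaussian-derivative filter bank is a hypothetical stand-in for the paper's K = 20 filters, and the non-saturating BCE form of $L_{adv}$ and the unit λ weights are assumptions; the Huber and L1 terms follow Equations 2 and 3 directly:

```python
import math
import torch
import torch.nn.functional as F

def steerable_kernels(K=20, size=9):
    # Hypothetical bank of oriented Gaussian-derivative kernels at K evenly
    # spaced orientations; the paper's exact K = 20 filter bank may differ.
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    yy = coords.view(-1, 1).expand(size, size)
    xx = coords.view(1, -1).expand(size, size)
    g = torch.exp(-(xx ** 2 + yy ** 2) / (2.0 * (size / 4.0) ** 2))
    kernels = []
    for k in range(K):
        theta = math.pi * k / K
        d = xx * math.cos(theta) + yy * math.sin(theta)  # directional axis
        kernels.append((-d * g).unsqueeze(0))
    return torch.stack(kernels)  # shape (K, 1, size, size)

def steerable_loss(y, y_hat, kernels):
    # Equation 3: mean Huber distance between steerable filter responses;
    # smooth_l1_loss is the Huber function with its parameter set to unity
    pad = kernels.shape[-1] // 2
    r_y = F.conv2d(y, kernels, padding=pad)
    r_hat = F.conv2d(y_hat, kernels, padding=pad)
    return F.smooth_l1_loss(r_hat, r_y)

def sgan_generator_objective(d_out_fake, y, y_hat, kernels,
                             lambda_adv=1.0, lambda_rec=1.0, lambda_steer=1.0):
    # Equation 4 (the paper's lambda values are not reproduced here)
    l_adv = F.binary_cross_entropy_with_logits(
        d_out_fake, torch.ones_like(d_out_fake))  # non-saturating L_adv
    l_rec = torch.mean(torch.abs(y - y_hat))      # Equation 2 (L1)
    l_steer = steerable_loss(y, y_hat, kernels)   # Equation 3
    return lambda_adv * l_adv + lambda_rec * l_rec + lambda_steer * l_steer
```

Because the filter bank is fixed, it can be precomputed once and reused at every training step.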
In training, we utilize 400 MRA volumes of size 512 × 512 × 100, and an additional 40 randomly selected volumes are used for testing.

The MRA scans in the dataset have higher spatial resolution in the axial plane; therefore, the sGAN architecture is slice-based. The image slices in a volume are normalized according to the mean and standard deviation of the whole brain. The sGAN model is trained for 50 epochs, and the learning rate is linearly decreased after 30 epochs. The loss term constants $\lambda_1$, $\lambda_2$, and $\lambda_3$ in Equation 4 are kept fixed during training, and the Adam [15] optimizer is used. The PyTorch framework is used in all of our experiments, which are run on NVIDIA Tesla K80 GPUs. We trained both models for a week. A feedforward pass for each brain generation takes about 10 seconds.
We utilize two different measures for performance evaluation. The first one is the peak signal-to-noise ratio (PSNR), which is defined by

$$PSNR = 10 \log_{10} \frac{n\,(\max y)^2}{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$

where n is the number of pixels in an image. The PSNR is calculated between the original MRA and the generated MRA images. (IXI dataset: http://brain-development.org/ixi-dataset; PyTorch: http://pytorch.org)
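For reference, this PSNR formula is a direct NumPy computation (the n in the numerator and the sum in the denominator reduce to the mean squared error):

```python
import numpy as np

def psnr(y, y_hat):
    # PSNR = 10 log10( n (max y)^2 / sum_i (y_i - y_hat_i)^2 )
    #      = 10 log10( (max y)^2 / MSE )
    y = np.asarray(y, dtype=np.float64)
    y_hat = np.asarray(y_hat, dtype=np.float64)
    mse = np.mean((y - y_hat) ** 2)
    return 10.0 * np.log10((y.max() ** 2) / mse)
```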
In the MRA modality generation, it is important to synthesize vessel structures correctly. We utilize the Dice score as the second measure in order to highlight the fidelity of the captured vascular anatomy in the synthesized MRA images. The Dice score is defined by
$$Dice(y, \hat{y}) = \frac{2\,|y \cap \hat{y}|}{|y| + |\hat{y}|}.$$

In order to calculate the Dice score, the segmentation maps are produced by an automatic vessel segmentation algorithm presented in [12] over both the original MRA images and the generated MRA images, using the same set of parameters in the segmentation method. As reference vascular segmentation maps are not available in the IXI dataset, we calculated the Dice score as the overlap between the vasculature segmented on the original images and the vasculature segmented on the generated images.
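On binary vessel masks, the Dice overlap reduces to a few NumPy operations:

```python
import numpy as np

def dice(seg_a, seg_b):
    # Dice(A, B) = 2 |A intersect B| / (|A| + |B|) over binary vessel masks
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```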
To our knowledge, no previous works have attempted synthetic MRA generation. To evaluate our results, we compare the generated MRA images corresponding to the baseline, which is the PatchGAN with the ResNet architecture, against the sGAN, which is the baseline with the added steerable loss term. The PSNR and Dice scores are tabulated in Table 1.

Method                                PSNR (dB)    Dice Score (%)
Baseline: L_adv + L_rec
sGAN:     L_adv + L_rec + L_steer

Table 1: Performance measures (mean PSNR and mean Dice scores) on the test set: the first row corresponds to the baseline PatchGAN; the second row shows the sGAN results.
We show sample visual results of representative slices in Figure 3. Sample 3D visual results are given as surface renderings of segmentation maps in Figure 4.
MRA is based on the different relaxation properties of moving spins in flowing blood inside vessels, compared to those of static spins found in other tissue. The presented sGAN method is a data-driven approach to the generation of the MRA contrast from the multi-contrast T1- and T2-weighted MRI, which are based on spin-lattice and spin-spin relaxation effects. It is possible to include other MR contrasts available in patient scans, such as Proton Density, FLAIR, and so on, as additional input channels to the sGAN network.

The sGAN relies on the recent popular PatchGAN framework as the baseline. In the adaptation of the baseline method to MRA generation, the steerable-filter response based loss term included in the sGAN method highlights the directional features of vessel structures. This leads to an enhanced smoothing along vessels while improving their continuity. This is demonstrated qualitatively through visual inspection. In quantitative evaluations, the sGAN performs similarly, with a slight (statistically insignificant) increase in PSNR values compared to those of the baseline. However, it is well known that the PSNR measure does not necessarily correspond to perceptual quality in image evaluations [20, 23]. In terms of the vascular segmentation maps extracted from the generated MRAs and the original MRA, the sGAN improves the overlap scores against the baseline. This is a desirable outcome, as the MRA targets imaging of vascular anatomy.

The presented sGAN method involves 2D slice generation. This choice is based on the native axial acquisition plane of the MRA sequences; hence, the generated MRA has expectedly higher resolution in the axial plane. Our future work includes the extension of sGAN to a fully 3D architecture.
Making use of 3D neighborhood information, both in generator networks and in 3D steerable filter responses, is expected to increase the continuity of vessels in 3D.

Figure 3: Visual comparison of generated 2D MRA axial slices to the original MRA slices by both the baseline and the sGAN methods. The last row shows a sagittal slice.

Figure 4: Visual comparison of segmentation maps over the generated MRA to those over the original MRA, in surface rendering format, using both the baseline and the sGAN methods.

The proposed sGAN has the potential to be useful in retrospective studies of existing MR image databases that lack the MRA contrast. Furthermore, after extensive validation, it could lead to cost and time effectiveness where it is needed, by construction of the MRA based on relatively more common sequences such as the T1- and T2-weighted MR contrasts.

References

[1] Ciprian Catana, Andre van der Kouwe, Thomas Benner, Christian J Michel, Michael Hamm, Matthias Fenchel, Bruce Fischl, Bruce Rosen, Matthias Schmand, and A Gregory Sorensen. Toward implementing an MRI-based PET attenuation-correction method for neurologic studies on the MR-PET brain prototype.
Journal of Nuclear Medicine, 51(9):1431–1438, 2010.
[2] Agisilaos Chartsias, Thomas Joyce, Mario Valerio Giuffrida, and Sotirios A Tsaftaris. Multimodal MR synthesis via modality-invariant latent representation. IEEE Transactions on Medical Imaging, 37(3):803–814, 2018.
[3] Mario Cirillo, Francesco Scomazzoni, Luigi Cirillo, Marcello Cadioli, Franco Simionato, Antonella Iadanza, Miles Kirchin, Claudio Righi, and Nicoletta Anzalone. Comparison of 3D TOF-MRA and 3D CE-MRA at 3 T for imaging of intracranial aneurysms. European Journal of Radiology, 82(12):e853–e859, 2013.
[4] Salman Ul Hassan Dar, Mahmut Yurt, Levent Karacan, Aykut Erdem, Erkut Erdem, and Tolga Çukur. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. arXiv preprint arXiv:1802.01221, 2018.
[5] William T Freeman, Edward H Adelson, et al. The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):891–906, 1991.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[7] Hayit Greenspan, Bram van Ginneken, and Ronald M Summers. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Transactions on Medical Imaging, 35(5):1153–1159, 2016.
[8] Michael P Hartung, Thomas M Grist, and Christopher J François. Magnetic resonance angiography: current status and future directions. Journal of Cardiovascular Magnetic Resonance, 13(1):19, 2011.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.
[12] Tim Jerman, Franjo Pernuš, Boštjan Likar, and Žiga Špiclin. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE Transactions on Medical Imaging, 35(9):2107–2118, 2016.
[13] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
[14] Konstantinos Kamnitsas, Christian Ledig, Virginia FJ Newcombe, Joanna P Simpson, Andrew D Kane, David K Menon, Daniel Rueckert, and Ben Glocker. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36:61–78, 2017.
[15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[17] Andrew JM Kiruluta and R Gilberto González. Magnetic resonance angiography: physical principles and applications. In Handbook of Clinical Neurology, volume 135, pages 137–149. Elsevier, 2016.
[18] Hans Knutsson, Roland Wilson, and Gösta Granlund. Anisotropic nonstationary image estimation and its applications: Part I–Restoration of noisy images. IEEE Transactions on Communications, 31(3):388–397, 1983.
[19] Thomas Markus Koller, Guido Gerig, Gabor Szekely, and Daniel Dettwiler. Multiscale detection of curvilinear structures in 2-D and 3-D image data. In Proceedings of the Fifth International Conference on Computer Vision, pages 864–869. IEEE, 1995.
[20] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint, 2016.
[21] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer, 2016.
[22] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, page 3, 2013.
[23] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
[24] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[25] Dong Nie, Roger Trullo, Jun Lian, Caroline Petitjean, Su Ruan, Qian Wang, and Dinggang Shen. Medical image synthesis with context-aware generative adversarial networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 417–425. Springer, 2017.
[26] Dwight G Nishimura. Time-of-flight MR angiography. Magnetic Resonance in Medicine, 14(2):194–201, 1990.
[27] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[28] Anna MH Sailer, Janneke P Grutters, Joachim E Wildberger, Paul A Hofman, Jan T Wilmink, and Willem H van Zwam. Cost-effectiveness of CTA, MRA and DSA in patients with non-traumatic subarachnoid haemorrhage. Insights into Imaging, 4(4):499–507, 2013.
[29] Stephen M Smith, Mark Jenkinson, Mark W Woolrich, Christian F Beckmann, Timothy EJ Behrens, Heidi Johansen-Berg, Peter R Bannister, Marilena De Luca, Ivana Drobnjak, David E Flitney, et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage, 23:S208–S219, 2004.
[30] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008.
[31] Wen Wei, Emilie Poirion, Benedetta Bodini, Stanley Durrleman, Olivier Colliot, Bruno Stankoff, and Nicholas Ayache. FLAIR MR image synthesis by using 3D fully convolutional networks for multiple sclerosis. In ISMRM-ESMRMB 2018 - Joint Annual Meeting, pages 1–6, Paris, France, June 2018.
[32] Philip M White, Evelyn M Teasdale, Joanna M Wardlaw, and Valerie Easton. Intracranial aneurysms: CT angiography and MR angiography for detection—prospective blinded comparison in a large patient cohort. Radiology, 219(3):739–749, 2001.
[33] Jelmer M Wolterink, Anna M Dinkla, Mark HF Savenije, Peter R Seevinck, Cornelis AT van den Berg, and Ivana Išgum. Deep MR to CT synthesis using unpaired data. In International Workshop on Simulation and Synthesis in Medical Imaging, pages 14–23. Springer, 2017.
[34] Ruifang Yan, Bo Zhang, Long Wang, Qiang Li, Fengmei Zhou, Jipeng Ren, Zhansheng Zhai, Zheng Li, and Hongkai Cui. A comparison of contrast-free MRA at 3.0 T in cases of intracranial aneurysms with or without subarachnoid hemorrhage. Clinical Imaging, 49:131–135, 2018.
[35] Guang Yang, Simiao Yu, Hao Dong, Greg Slabaugh, Pier Luigi Dragotti, Xujiong Ye, Fangde Liu, Simon Arridge, Jennifer Keegan, Yike Guo, et al. DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Transactions on Medical Imaging, 2017.
[36] Habib Zaidi, Marie-Louise Montandon, and Daniel O Slosman. Magnetic resonance imaging-guided attenuation and scatter corrections in three-dimensional brain positron emission tomography. Medical Physics, 30(5):937–948, 2003.
[37] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2528–2535. IEEE, 2010.
[38] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593.