Deep Slice Interpolation via Marginal Super-Resolution, Fusion and Refinement
Cheng Peng, Wei-An Lin, Haofu Liao, Rama Chellappa, S. Kevin Zhou
The University of Maryland, College Park; University of Rochester; Chinese Academy of Sciences; Peng Cheng Laboratory, Shenzhen
Abstract.
We propose a marginal super-resolution (MSR) approach based on 2D convolutional neural networks (CNNs) for interpolating an anisotropic brain magnetic resonance scan along the highly under-sampled direction, which is assumed to be axial without loss of generality. Previous methods for slice interpolation only consider data from pairs of adjacent 2D slices; the possibility of fusing information from the direction orthogonal to the 2D slices remains unexplored. Our approach performs MSR in both sagittal and coronal directions, which provides an initial estimate for slice interpolation. The interpolated slices are then fused and refined in the axial direction for improved consistency. Since MSR consists of only 2D operations, it is more feasible in terms of GPU memory consumption and requires fewer training samples than 3D CNNs. Our experiments demonstrate that the proposed method outperforms traditional linear interpolation and baseline 2D/3D CNN-based approaches. We conclude by showcasing the method's practical utility in estimating brain volumes from under-sampled brain MR scans through semantic segmentation.
Magnetic resonance imaging (MRI) has been one of the prevailing gold standards for diagnostic purposes. It is not only non-invasive, but also better at targeting different human tissues with specific contrasts that reveal the underlying anatomy. The main disadvantage of MRI compared to other medical imaging modalities (e.g., computed tomography, or CT) is its long acquisition time, which is governed by the duration of the frequency signals to be emitted by atoms and sampled by the machine. There is a long history of studies on accelerating the MRI sampling process [1,2,3,4] by undersampling the 2D k-space during acquisition; however, only a relatively small number of studies [5,6,7,8] have focused on interpolating between the sampled slices.

In practice, most MR volumes are acquired anisotropically, with a high resolution within slices and a sparse resolution between slices. For example, Fig. 1 shows a brain MR scan whose axial direction is sparsely sampled. As a result, image quality suffers when viewing from the coronal and sagittal directions. It is desirable to have a consistent resolution across all dimensions, both for visualization and for medical analysis tasks such as brain volume estimation.
Fig. 1: The axial, coronal, and sagittal views of an anisotropic MR volume are fitted to isotropic resolution through (Left) linear interpolation and (Right) our proposed slice-interpolation method.

Traditionally, slice interpolation has been done with two groups of methods: intensity-based and deformation-based. Linear and cubic spline interpolation are classic examples of intensity-based methods, which directly perform the interpolation based on the intensities of the adjacent slices. Deformation-based methods estimate deformation fields between adjacent slices, and then interpolate in-between pixels based on the estimated fields. However, these methods require that adjacent MR slices contain similar anatomical structures. That is, the structural change must be sufficiently small that a dense pixel correspondence can be established between adjacent slices. When the anatomical variation between slices is significant, a more sophisticated modeling approach is needed.

Recently, deep convolutional neural networks (DCNNs) have been outperforming traditional approaches in medical image analysis due to their ability to model complex variations within data [4,9,10]. For slice interpolation, DCNNs can be applied to learn a mapping from an anisotropic MR volume to an isotropic one. However, directly addressing the task in 3D is challenging due to the high memory consumption of 3D networks. In this work, we break down the task of 3D slice interpolation into a sequence of 2D problems to produce anatomically consistent slice interpolations while remaining memory-feasible. Specifically, we propose a novel marginal super-resolution to super-resolve isotropic views in the sagittal and coronal directions with a 2D CNN. The interpolation along the axial direction can then be estimated by a fusion of the isotropic sagittal and coronal views.
Finally, the interpolated slices are processed to recover more details via refinement.

Our main contributions can be summarized as follows:
– We propose a novel marginal super-resolution approach to break down the 3D slice interpolation problem into several 2D problems, which is more feasible in terms of GPU memory consumption and the amount of data available for training.
– We propose a two-view fusion approach to incorporate the 3D anatomical structure. The interpolated slices after fusion achieve high structural consistency. The final refinement further recovers fine details.
– We perform extensive evaluations on a large-scale MR dataset, and show that the proposed method outperforms all the competing CNN models, including 3D CNNs, in terms of quantitative measurement, visual quality, and brain matter segmentation.
Traditional slice interpolation methods.
Early work on interpolating volumetric medical data dates back to 1992, when Goshtasby et al. [5] proposed to leverage the small and gradual anatomic differences between consecutive slices, and to find correspondences between pixels by searching through small neighborhoods. A slew of methods were proposed in the subsequent years, focusing on finding more accurate deformation fields, including shape-based methods [6], morphology-based methods [7], and registration-based methods [8]. Linear interpolation can be regarded as a special case, which essentially assumes no deformation between slices.

An important assumption made in the above-mentioned methods is that adjacent slices contain similar anatomical structures, i.e., the changes in the structures have to be sufficiently small that a dense correspondence can be found between two slices. This assumption largely limits the applicability of slice interpolation methods, especially when slices are sparsely sampled. Furthermore, these methods do not utilize information outside the two adjacent slices.
Learning-based super-resolution methods.
Slice interpolation can be viewed as a special case of 3D super-resolution. Here we review the literature on 2D Single Image Super-Resolution (SISR), especially approaches based on CNNs. Dong et al. [11] first proposed SRCNN, which learns a mapping that optimally transforms low-resolution (LR) images to high-resolution (HR) images. Many subsequent studies explored strategies to improve SISR, such as using deeper architectures and weight-sharing [12,13,14]. However, these methods require bilinear upsampling as a pre-processing step, which drastically increases computational complexity [15]. To address this issue, Dong et al. [15] proposed applying deconvolution layers so that the LR image is directly upsampled to a finer resolution. Furthermore, many studies have shown that residual learning provides better performance in SISR [16,17,18]. Specifically, Zhang et al. [18] incorporated both residual learning and dense blocks [19], and introduced Residual Dense Blocks (RDB) to allow all layers of features to be seen directly by other layers, achieving state-of-the-art performance.

Generative Adversarial Networks (GANs) [20] have also been incorporated into SISR to improve the visual quality of the generated images. Ledig et al. pointed out that training SISR networks solely with an L1 or L2 loss intrinsically leads to blurry estimations, and proposed SRGAN [17], which generates much sharper and more realistic images than other approaches, despite having lower peak signal-to-noise ratios.

Though available computation capacity has been increasing, 3D CNNs are still limited by memory capacity due to a considerable increase in the size of network parameters and input data. A common compromise is to extract small patches from the 3D volume to reduce the input size [21]; however, this also limits the effective receptive field of the network. In practice, 3D CNNs are also limited by the amount of training data needed to ensure generalization.
Let I(x, y, z) ∈ R^{N×N×N} denote an isotropic MR volume. By convention, we refer to the x axis as the "sagittal" axis, the y axis as the "coronal" axis, and the z axis as the "axial" axis. Accordingly, there are three types of slices:
– the sagittal slice for a given x: I^x(y, z) = I(x, y, z), ∀x;
– the coronal slice for a given y: I^y(x, z) = I(x, y, z), ∀y;
– the axial slice for a given z: I^z(x, y) = I(x, y, z), ∀z.
We also define a slab of s slices, say along the x axis, as

I^{x,s} = { I^{x+l}(y, z) | l = −(s−1)/2, …, 0, …, (s−1)/2 }.   (1)

I^{y,s} and I^{z,s} are defined similarly. Without loss of generality, in this work we consider slice interpolation along the axial axis. From I(x, y, z), the corresponding anisotropic MR volume is defined as

I↓k(x, y, z) = I(x, y, k · z),   (2)

where k is the sparsity factor. The goal of slice interpolation is to find a transformation T : R^{N×N×(N/k)} → R^{N×N×N} that can optimally transform I↓k(x, y, z) back to I(x, y, z).

There are two possible baseline realizations of T using CNNs:
– 2D CNN. More in line with the traditional methods, a 2D CNN takes two adjacent slices I^z↓k(x, y) and I^{z+1}↓k(x, y) as inputs, and directly estimates the in-between missing slices. One major drawback of this approach is that a simple 2D CNN has limited capability to model the variations in highly anisotropic volumes.
– 3D CNN. A 3D CNN is learned as a mapping from the sparsely sampled volume I↓k(x, y, z) to the fully sampled volume I(x, y, z). This straightforward approach, however, suffers from training memory issues and insufficient training data.

Below, we present our proposed algorithm, which retains the advantages of the baseline CNN models discussed above while mitigating their disadvantages.
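The downsampling of Eq. (2) and the slab construction of Eq. (1) are straightforward array operations. The following NumPy sketch illustrates them on a toy volume; the function names are ours, introduced for illustration only:

```python
import numpy as np

def downsample_axial(volume, k):
    """Keep every k-th axial slice: I_down_k(x, y, z) = I(x, y, k*z)."""
    return volume[:, :, ::k]

def sagittal_slab(volume, x, s):
    """Slab of s sagittal slices centered at x,
    i.e. indices x-(s-1)/2, ..., x, ..., x+(s-1)/2 (s assumed odd)."""
    half = (s - 1) // 2
    return volume[x - half : x + half + 1, :, :]

# A toy isotropic volume with N = 16.
I = np.random.rand(16, 16, 16)
I_down = downsample_axial(I, k=4)   # shape (16, 16, 4), sparsity factor 4
slab = sagittal_slab(I, x=8, s=3)   # shape (3, 16, 16)
```

Note that `downsample_axial` models acquisition sparsity by simple slice selection, matching the definition of I↓k above.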
We propose to break down the 3D slice interpolation problem into a series of 2D tasks, and to integrate the contextual information from all three anatomical views to achieve structurally consistent reconstruction and improved memory efficiency. The two stages are as follows:
– Marginal super-resolution (MSR), where we provide high-quality estimates of the interpolated slices by extrapolating context from the sagittal and coronal axes.
– Two-view Fusion and Refinement (TFR), where we fuse the estimates and further refine them with information from the axial axis.

Fig. 2: Marginal Super-Resolution Pipeline.
Fig. 2 demonstrates the pipeline of MSR. Given I↓k(x, y, z), we view it as a sequence of 2D sagittal slices I^x↓k(y, z) marginally from the sagittal axis. The same volume can also be treated as I^y↓k(x, z) from the coronal axis. We observe that super-resolving I^x↓k(y, z) to I^x(y, z) and I^y↓k(x, z) to I^y(x, z) is equivalent to applying a sequence of 2D super-resolutions along the x axis and y axis, respectively. Therefore, we apply a residual dense network (RDN) [18] M_θ to upsample I^x↓k(y, z) and I^y↓k(x, z) as follows:

I^x_sag(y, z) = M_θ(I^{x,s}↓k(y, z)),  I^y_cor(x, z) = M_θ(I^{y,s}↓k(x, z)).   (3)

Notice that instead of super-resolving 2D slices independently, we propose to take a slab of s slices as input and estimate a single SR output. Using a larger s allows more context to be modeled. The MSR process is repeated for all x and y. Finally, the super-resolved slices can be reformatted as sagittally and coronally super-resolved volumes, I_sag(x, y, z) and I_cor(x, y, z), respectively. We apply the following L1 loss to train the RDN:

L_MSR = ‖M_θ(I^{x,s}↓k) − I^x_gt‖_1 + ‖M_θ(I^{y,s}↓k) − I^y_gt‖_1,   (4)

where I^x_gt = I^x(y, z) and I^y_gt = I^y(x, z) in the isotropic MR volume.

From the axial perspective, I_sag(x, y, z) and I_cor(x, y, z) provide line-by-line estimates for the missing axial slices. However, since no constraint is enforced on the estimated axial slices, inconsistent interpolations lead to noticeable artifacts (see Section 5.4). We resolve this problem in the second, TFR, stage of the proposed pipeline.
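The marginal reformatting logic of MSR can be sketched as follows. Here a trivial nearest-neighbor upsampler stands in for the learned network M_θ (and the slab input is omitted), so the sketch shows only the view-wise iteration, not the actual model:

```python
import numpy as np

def sr2d(slice_lr, k):
    """Placeholder for the 2D SR network M_theta: nearest-neighbor
    upsampling of a 2D slice along its sparse (last) axis by factor k."""
    return np.repeat(slice_lr, k, axis=-1)

def marginal_super_resolution(I_down, k):
    """Super-resolve the anisotropic volume marginally from two views."""
    Nx, Ny, _ = I_down.shape
    # Sagittal pass: each slice I^x(y, z) is a 2D image, sparse along z.
    I_sag = np.stack([sr2d(I_down[x], k) for x in range(Nx)], axis=0)
    # Coronal pass: each slice I^y(x, z), stacked back along the y axis.
    I_cor = np.stack([sr2d(I_down[:, y], k) for y in range(Ny)], axis=1)
    return I_sag, I_cor

I_down = np.random.rand(16, 16, 4)                     # 4x-sparse along z
I_sag, I_cor = marginal_super_resolution(I_down, k=4)  # both (16, 16, 16)
```

With the nearest-neighbor placeholder, the observed axial slices are preserved exactly at every k-th z index; a trained M_θ would additionally hallucinate the in-between content from slab context.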
Fig. 3: Two-view Fusion Pipeline.

The TFR stage is the counterpart of MSR, which further improves the quality of slice interpolation by learning the structural variations along the axial direction. As shown in Fig. 3, we first resample the sagittally and coronally super-resolved volumes I_sag(x, y, z) and I_cor(x, y, z) from the axial direction to obtain I^z_sag(x, y) and I^z_cor(x, y), respectively. A fusion network F_φ takes the two slices as inputs and combines information from the two views. The objective function for training the fusion network is:

L_fuse = ‖I^z_fuse(x, y) − I^z_gt‖_1,   (5)

where I^z_fuse(x, y) = F_φ(I^z_sag, I^z_cor) is the output of the fusion network, and I^z_gt = I^z(x, y) in the isotropic MR volume. After training, the fusion network is applied to all the interpolated slices {I^z_sag | (z mod k) ≠ 0} and {I^z_cor | (z mod k) ≠ 0}, yielding an MR volume I_fuse(x, y, z).

Fig. 4: Refinement Pipeline.

After fusion, the interpolated slices already have visually pleasing quality. Finally, to improve between-slice consistency along the axial axis, a refinement network R_ψ takes a slab of k + 1 slices I^{z,k+1}_fuse as input and generates a consistent output slab I^{z,k+1}_refine. The size is selected as k + 1 to ensure the refinement network has information from one or two observed slices. The pipeline is illustrated in Fig. 4. The loss function is given by:

L_refine = ‖I^{z,k+1}_refine − I^{z,k+1}_gt‖_1.   (6)

A 2D CNN estimates missing slices solely based on adjacent MR slices. In contrast, the proposed MSR and TFR take into account the full context from the sagittal, coronal, and axial views, thus providing strong estimates of the in-between slices. A 3D CNN directly maps a sparsely sampled MR volume to a fully sampled MR volume.
Due to memory limitations, a volume often needs to be divided into small patches during training, which limits the effective receptive field of 3D CNNs. In the proposed method, interpolation in 3D space is treated as a sequence of 2D operations, which ensures that the networks can be trained without relying on patches, thus allowing full contextual information to be captured. Furthermore, there are sufficient samples to train 2D CNNs, which mitigates the overfitting problem that plagues 3D CNNs.
We implement the proposed framework using PyTorch (https://pytorch.org). The RDN [18] architecture with two RDBs is used as the building unit for our networks. For the Fusion, Refinement, and baseline 2D CNN models, where the inputs and outputs have the same image size, we replace the upsampling network in RDN with one convolutional layer. The input to the MSR network has s = 3. Note that due to memory constraints, the 3D CNN uses only one RDB. We train the models with the Adam optimizer, with a momentum of 0.5 and a learning rate of 0.0001, until they reach convergence.

Dataset.
We employ 120 T1 MR brain scans from the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The MR scans are isotropically sampled at 1 mm, with volumes of 256 × 256 × 256 pixels, amounting to 30,720 slices in each of the sagittal, coronal, and axial directions. We further down-sample the isotropic volumes by factors of k = 4 and k = 8, yielding I↓k(x, y, z) of sizes 256 × 256 × 64 and 256 × 256 × 32, respectively. The data is split into training/validation/testing sets with 95/5/20 samples. Note that during test time, we only select slices that contain mostly brain tissue; the number of samples for each sparsity is presented in Table 1.
Evaluation metrics
We compare different slice interpolation approaches using two types of quantitative metrics. First, we use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to measure low-level image quality. Second, we evaluate the quality of the interpolated slices through gray/white-matter segmentation. The segmentation network has a U-Net architecture, which is one of the winning models in the MRBrainS challenge [22], and is trained on the OASIS dataset [23]. The Dice Coefficient (DICE) and Hausdorff Distance (HD) between the segmentation maps of ground truth slices and generated slices are calculated. Due to the memory limitation of the 3D CNN, we can at most super-resolve a limited region of 144 × 256 × 256 pixels during evaluation. For fair comparison, the evaluation metrics are calculated over the same region across all methods.

Table 1: Quantitative evaluations for different slice interpolation approaches. For the DICE and HD metrics, we present results on gray matter (GM)/white matter (WM) segmentation. The best results are in bold and the second best underlined.

Sparsity | Method | PSNR (dB) | SSIM   | DICE GM/WM    | HD (90th pct.) GM/WM
4        | LI     | 26.39     | 0.8317 | 0.7716/0.7296 | 3.607/7.965
4        | 2D CNN | 31.24     | 0.9313 | 0.8813/0.8334 | 3.176/12.36
4        | 3D CNN | 31.34     | 0.9292 | 0.8536/0.8265 | 2.898/7.373
4        | Ours   | —         | —      | —             | —
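The PSNR metric used above has a closed form in terms of the mean squared error. A minimal sketch (our own helper, not the paper's evaluation code; libraries such as scikit-image provide an equivalent `peak_signal_noise_ratio`):

```python
import numpy as np

def psnr(pred, gt, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Sanity check: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)
print(round(psnr(b, a), 2))  # -> 20.0
```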
In this section, we evaluate the performance of our method and the baseline approaches. Quantitative comparisons are presented in Table 1. We observe that all three CNN-based methods have higher PSNR and SSIM than the widely used linear interpolation. The 3D CNN slightly outperforms the 2D CNN at 4x sparsity, but performs worse at 8x sparsity. Among the three CNN methods, ours consistently outperforms the 2D CNN and 3D CNN baselines.

The performance gain in accurately segmenting gray and white matter is large from linear interpolation to the baseline CNN-based methods. However, at 8x sparsity, the HD scores of linear interpolation are comparable with the 2D CNN and 3D CNN, while our method outperforms these approaches by at least 10%. This demonstrates the robustness of our method even at very high sparsity.
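The segmentation-based evaluation relies on the Dice coefficient between binary masks, 2|A∩B| / (|A|+|B|). A minimal sketch of this metric (our own helper, not the paper's evaluation code):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum())

# Toy masks: 2 overlapping pixels, 3 positives each -> 2*2/(3+3).
a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(dice(a, b))  # -> 0.666...
```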
In Fig. 5, we present the observed slices I^z↓k and I^{z+1}↓k along with the interpolated slices produced by different methods. Specifically, we demonstrate the second of three interpolated MR slices for 4x sparsity, and the third of seven interpolated slices for 8x sparsity. We highlight the regions where the anatomical structures change significantly compared to the observed slices I^z↓k and I^{z+1}↓k. To reduce the effect of outliers, HD is calculated on the 90th percentile displacement.

Fig. 5: Visual comparisons of slice interpolation approaches (columns: I^z↓k, LI, 2D CNN, 3D CNN, Ours, GT, I^{z+1}↓k, with PSNR (dB)/SSIM listed per method). For 4x sparsity, the second of three interpolated MR slices is presented. For 8x sparsity, the third of seven interpolated slices is presented.

We observe that although the 2D CNN has comparable performance in terms of PSNR and SSIM, it tends to produce false anatomical structures in the zoomed regions. The 3D CNN is able to resolve more accurate details. However, the improvement is quite limited, which we attribute to the fact that the 3D CNN requires more training MR volumes in order to generalize, and has a smaller receptive field due to patch-based training. Our method benefits from the large receptive field of the 2D CNN and the two-view fusion, which not only produces sharper images but also correctly estimates brain anatomy. Sharp and accurate estimation is crucial in clinical applications such as diagnosing Alzheimer's Disease by brain volume estimation.

In Fig. 6, we demonstrate the advantage of the proposed method in brain matter segmentation. It is clear that although the 2D and 3D CNNs generate visually plausible interpolations, as presented in Fig. 5, the brain matter is easily misclassified due to incorrect anatomical structures and blurred details.
In this section, based on 4x sparsity, we evaluate the effectiveness of each proposed component. The following settings are considered:
– MSR^n_sag: slice interpolation based on only sagittal-view MSR, with number of input slices n = 1, 3.
– MSR^n_cor: slice interpolation based on only coronal-view MSR, with number of input slices n = 1, 3.
– Fused: slice interpolation with the fusion network; the inputs to the network are MSR_sag and MSR_cor.
– Refined: the proposed full pipeline.

Fig. 6: Visual comparison of gray matter (green)/white matter (blue) segmentation over different methods, with the respective DICE scores listed under the images.

Table 2: Quantitative ablation study. Baseline numbers are included for comparison. The best results are in bold and the second best underlined.

Stage           | PSNR (dB) | SSIM
baseline 2D CNN | 31.24     | 0.9313
baseline 3D CNN | 31.34     | 0.9292
MSR_sag         | —         | —
MSR_cor         | —         | —
Fused           | —         | —
Refined         | —         | —

From Table 2, it is clear that each proposed component improves the quality of slice interpolation. Notice that even without fusion and refinement, the axial slices interpolated by MSR_sag and MSR_cor are already better than the baseline 2D/3D CNNs.

Visual comparisons are shown in Fig. 7, where we select a challenging slice with abundant anatomical details. From Fig. 7, it is clear that marginally super-resolving axial slices from the coronal and sagittal views leads to noticeable horizontal (MSR^n_sag) and vertical (MSR^n_cor) artifacts. Furthermore, some small details are better resolved by MSR_sag, while others are better resolved by MSR_cor.

Fig. 7: Visual comparison of the proposed components (panels: GT, MSR_sag, Fused; GT zoomed, MSR_cor, Refined).

The fusion network combines the features from MSR_sag and MSR_cor, which effectively reduces inconsistency. With the additional axial information, the fused slice is then further improved by the refinement network. In addition to the L1 loss, we also experimented with a GAN loss at the refinement stage. However, we find that GAN tends to generate fake anatomical details, which is undesirable in medical applications.

In this work, we proposed a multi-stage 2D CNN framework called deep slice interpolation. This framework allows us to recover missing slices with high quality, even when the observed slices are sparsely sampled. We evaluated our approach on a large ADNI dataset, demonstrating that our method outperforms possible 2D/3D CNN baselines both visually and quantitatively. Furthermore, we have illustrated that the MR slices estimated by the proposed method yield superior segmentation accuracy. In the future, we plan to investigate the potential application of the proposed framework to real screening MRI, which often has a very low slice density.
References
1. Ravishankar, S., Bresler, Y.: MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans. Med. Imaging (5) (2011) 1028–1041
2. Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine (6) (2007) 1182–1195
3. Ma, S., Yin, W., Zhang, Y., Chakraborty, A.: An efficient algorithm for compressed MR imaging using total variation and wavelets. In: CVPR 2008, Anchorage, Alaska, USA (2008)
4. Schlemper, J., Caballero, J., Hajnal, J.V., Price, A.N., Rueckert, D.: A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging (2) (2018) 491–503
5. Goshtasby, A.A., Turner, D.A., Ackerman, L.V.: Matching of tomographic slices for interpolation. IEEE Trans. Med. Imaging (4) (1992) 507–516
6. Grevera, G.J., Udupa, J.K.: Shape-based interpolation of multidimensional grey-level images. IEEE Trans. Med. Imaging (6) (1996) 881–892
7. Lee, T., Wang, W.: Morphology-based three-dimensional interpolation. IEEE Trans. Med. Imaging (7) (2000) 711–721
8. Penney, G.P., Schnabel, J.A., Rueckert, D., Viergever, M.A., Niessen, W.J.: Registration-based interpolation. IEEE Trans. Med. Imaging (7) (2004) 922–926
9. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI 2015, Munich, Germany, Part III (2015) 234–241
10. Liu, S., Xu, D., Zhou, S.K., Mertelmeier, T., Wicklein, J., Jerebko, A.K., Grbic, S., Pauly, O., Cai, W., Comaniciu, D.: 3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes. CoRR abs/1711.08580 (2017)
11. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. CoRR abs/1501.00092 (2015)
12. Kim, J., Lee, J.K., Lee, K.M.: Accurate image super-resolution using very deep convolutional networks. CoRR abs/1511.04587 (2015)
13. Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep CNN denoiser prior for image restoration. In: CVPR 2017, Honolulu, HI, USA (2017) 2808–2817
14. Kim, J., Lee, J.K., Lee, K.M.: Deeply-recursive convolutional network for image super-resolution. In: CVPR 2016, Las Vegas, NV, USA (2016) 1637–1645
15. Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. CoRR abs/1608.00367 (2016)
16. Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. CoRR abs/1707.02921 (2017)
17. Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., Wang, Z., Shi, W.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR 2017, Honolulu, HI, USA (2017) 105–114
18. Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. CoRR abs/1802.08797 (2018)
19. Huang, G., Liu, Z., Weinberger, K.Q.: Densely connected convolutional networks. CoRR abs/1608.06993 (2016)
20. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., Bengio, Y.: Generative adversarial networks. CoRR abs/1406.2661 (2014)
21. Chen, Y., Shi, F., Christodoulou, A.G., Zhou, Z., Xie, Y., Li, D.: Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network. CoRR abs/1803.01417 (2018)
22. Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D.L., Erickson, B.J.: Deep learning for brain MRI segmentation: State of the art and future directions. J. Digital Imaging (4) (2017) 449–459
23. Marcus, D.S., Wang, T.H., Parker, J., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cognitive Neuroscience 19