GRAPPA-GANs for Parallel MRI Reconstruction
Nader Tavaf, Amirsina Torfi, Kamil Ugurbil, Pierre-François Van de Moortele
Abstract — k-space undersampling is a standard technique to accelerate MR image acquisitions. Reconstruction techniques including GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA) and its variants are utilized extensively in clinical and research settings. A reconstruction model combining GRAPPA with a conditional generative adversarial network (GAN) was developed and tested on multi-coil human brain images from the fastMRI dataset. For various acceleration rates, GAN and GRAPPA reconstructions were compared in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). For an acceleration rate of R=4, PSNR improved from 33.88 using regularized GRAPPA to 37.65 using GAN. GAN consistently outperformed GRAPPA for various acceleration rates.

Index Terms — MRI, reconstruction, GAN, generative, adversarial, GRAPPA, accelerated, parallel, medical imaging.
I. INTRODUCTION

Magnetic Resonance Imaging (MRI) is a prevalent non-invasive medical imaging technique with various clinical and research applications. A major advantage of MRI is its potentially high resolution; however, MRI generally requires lengthy acquisition times to achieve high resolution images. Undersampling the MR signal (obtained in the frequency domain, a.k.a. k-space) is a method to accelerate such time-consuming acquisitions. Parallel imaging refers to the methods used for reconstructing MR images from undersampled k-space signal. Generally, parallel image reconstruction techniques take advantage of the additional encoding information obtained using (ideally independent) elements of a receiver array and/or mathematical properties of the frequency domain signal to compensate for the loss of information due to the undersampling. Nevertheless, consequences of that information loss generally detract from the quality of the images reconstructed from undersampled k-space.

The aim of improving undersampled reconstructions can be pursued from multiple different angles. While an extensive review of all such research efforts is beyond the scope of this article, we mention a few relevant works in each line of research to provide context for the current paper. In terms of hardware, there has been significant effort in the MR research community to improve the sensors used to acquire the signal (radio-frequency coils), to reduce noise and noise correlation between different channels, or to take advantage of additional receive channels (e.g. [1]–[4]). There has been a wider variety of advancements on the post-processing front. SENSE [5] and GRAPPA [6] are two of the primary methods for parallel MR image reconstruction. GRAPPA tries to estimate the missing k-space signal, but it inherently suffers from noise amplification. Generally, k-space undersampling comes at the expense of aliasing in the reconstruction.
(Author affiliation: Center for Magnetic Resonance Research (CMRR), University of Minnesota Twin Cities, Minneapolis, MN, 55455 USA. E-mail: [email protected].)

Several variations and extensions to SENSE and GRAPPA have been proposed which primarily rely on regularization to suppress noise amplification. Compressed sensing also relies on non-linear optimization of randomly undersampled k-space data, assuming the data is compressible [7]. Compressed sensing MRI generally utilizes total variation, wavelet/cosine transforms, or dictionary learning as sparse representations of the naturally compressible MR images.

More recently, side effects of existing techniques (noise amplification, staircase artifacts of total variation, block artifacts of wavelets, relatively long reconstruction time of iterative optimization techniques, etc.) and the advent of public MR image datasets have encouraged researchers to look into deep learning techniques, which have often outperformed conventional regularization and/or optimization-based techniques in various applications, including variants of the undersampled image reconstruction problem (e.g. [8], [9]). Among the promising literature, several works have used generative adversarial networks (GANs) [10], [11] to reconstruct undersampled images. Yang et al. [12] proposed a GAN to address the aliasing artifact resulting from the sub-Nyquist sampling rate. Their proposed architecture used a pretrained network to extract an abstract feature representation from the reconstruction and enforce consistency with the target at that feature level. Murugesan et al. [13] and Emami et al. [14] used context-dependent/attention-guided GANs, which have a feedback loop back to the generator input providing information focusing on local deviations from tissue. Mardani et al. [15] and Deora et al. [16] used residual skip connections inside each convolutional block of their generator. It is noteworthy that Mardani suggests the discriminator outputs can be used to focus on sensitive anatomies.
Dar et al. [17] also used perceptual priors in their multi-contrast reconstruction GAN. The above-mentioned studies using GANs have demonstrated enhanced performance compared to state-of-the-art compressed sensing and other parallel imaging reconstruction techniques. However, one of the primary critiques of GAN-based reconstruction is the suggestion that GANs are prone to hallucination (see for example [15]).

Here, we propose a novel method for reconstruction of undersampled/accelerated MRI images that combines GRAPPA and GAN to further improve the reconstruction quality by building on our proof-of-principle demonstration [18]. Our primary contributions include:
• we propose a combination of GRAPPA and GAN,
• in addition to the adversarial losses, we include data-consistency and perceptual feature-level losses for artifact removal.
Fig. 1. Equidistant k-space undersampling with random position of the first k-space line, keeping the central k-space fully sampled (ACS lines used for GRAPPA). From left to right: fully sampled, subsampled with R=4, subsampled with R=8.

II. METHODS
A. Undersampling scheme
The original data is fully sampled in k-space, allowing for comparison of undersampled reconstructions with a fully sampled ground truth reconstruction. Various undersampling schemes have been used in the literature, with uniform random subsampling, equidistant random subsampling, and Gaussian random subsampling being the primary schemes. Given that our dataset (discussed in more detail shortly) is composed of 2D axial slices, our analysis uses only 1D subsampling along the phase encoding direction. Here, we have used equidistant random subsampling while keeping a fraction of the k-space lines at the center of k-space fully sampled, as is customary in the MRI literature and required for GRAPPA reconstruction. Equidistant random undersampling means that while the k-space is subsampled equidistantly, the location of the first k-space line is selected at random. For an acceleration rate (or subsampling ratio) of R=4, 8% of k-space lines were preserved at the center, and for R=8, 4% of the k-space lines were preserved at the center. Figure 1 demonstrates the subsampling scheme in k-space.

B. Reconstruction method
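Before reconstruction, the equidistant random mask of Section II-A is applied to the fully sampled k-space. A minimal NumPy sketch of such a mask generator (our own illustration under the stated sampling rules, not the authors' released code) might look like:

```python
import numpy as np

def equidistant_random_mask(n_lines, accel, center_fraction, rng=None):
    """1D phase-encode mask: equidistant lines at rate `accel` with a random
    offset for the first sampled line, plus a fully sampled band of ACS
    lines at the center of k-space."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(n_lines, dtype=bool)
    offset = int(rng.integers(accel))   # random position of the first k-space line
    mask[offset::accel] = True          # equidistant sampling
    n_acs = int(round(center_fraction * n_lines))
    start = (n_lines - n_acs) // 2
    mask[start:start + n_acs] = True    # keep central ACS lines fully sampled
    return mask

# R=4 keeps 8% of central lines fully sampled; R=8 would use 4% (Section II-A)
mask = equidistant_random_mask(320, accel=4, center_fraction=0.08)
```

The mask is then broadcast along the frequency-encode and coil dimensions and multiplied into the fully sampled k-space to simulate the accelerated acquisition.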
Details of GRAPPA implementations have been included in various publications [6]. Briefly, GRAPPA uses linear shift-invariant convolutions in k-space. Convolutional kernels were learned from a fully sampled subset at the center of k-space (auto-calibration signal, or ACS, lines), constrained by a Tikhonov regularization term, and then used to interpolate the skipped k-space lines using multi-channel (receive array) raw data. We performed a GRAPPA root-sum-of-squares reconstruction of the undersampled, multi-channel image prior to feeding it to the GAN.

(The term “channel” in the MRI context refers to the number of sensors or coils used in image acquisition, whereas in the deep learning context, it is used interchangeably with the number of kernels or filters.)

In a generic GAN, a generator network (G: m̃ → m̂) competes with a discriminator (D: m̂ → (0, 1)) in a min-max optimization problem,

min_{θ_G} max_{θ_D} L(θ_D, θ_G) = E[log D(m)] + E[log(1 − D(G(m̃)))],

where the generator learns the mapping from the GRAPPA reconstruction of the undersampled image, m̃, to its prediction, m̂, of the target, fully sampled image, m. Note that the GAN learns in the image domain (not the frequency domain). In essence, first, regularized GRAPPA is used to fill in the missing k-space lines. Then, a 2D discrete fast Fourier transform is performed to reconstruct individual images of individual coils. A root-sum-of-squares (RSS) reconstruction, m̃, of the individual magnitude images from individual coils is then used as the input to the generator. The generator learns to predict the ground truth given this subsampled reconstruction, while the discriminator learns to classify/distinguish between generator-reconstructed images and ground-truth images.

The GAN was composed of a generator (a UNET [19]) and a discriminator (a convolutional neural network used as a binary classifier). The network architecture is depicted symbolically in Figure 2. The UNET consisted of an encoder and a decoder.
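The input pipeline described above (GRAPPA-filled k-space, per-coil 2D inverse FFT, then root-sum-of-squares combination) can be sketched in a few lines of NumPy. This is our illustration of the standard RSS step, not the paper's implementation:

```python
import numpy as np

def rss_reconstruction(kspace_filled):
    """Root-sum-of-squares image from multi-coil k-space.

    kspace_filled: complex array of shape (n_coils, ny, nx) whose missing
    lines have already been filled (e.g. by regularized GRAPPA).
    """
    # 2D inverse FFT per coil, assuming a centered k-space convention
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace_filled, axes=(-2, -1)),
                     axes=(-2, -1)),
        axes=(-2, -1))
    # combine the magnitude images across the coil dimension
    return np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))
```

The resulting RSS image plays the role of the generator input m̃, while the same combination of the fully sampled k-space yields the ground truth m.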
The encoder was composed of blocks of batch normalization [20], 2D convolution, and leakyReLU, interleaved by max pooling to down-sample the images. Each one of these blocks had three convolutional/activation layers with in-block (resnet-type) skip connections passing the information derived at earlier layers to the features computed at later layers. The decoder was composed of similar normalization, convolution, and leakyReLU blocks interleaved by transposed 2D convolutions for up-sampling. Skip connections were used to add high-level feature representations of the encoding path to elements of the decoding path. The original implementation in [19] learns a prediction of the image; however, we included a skip connection from the input of the encoder to be added to the output of the decoder, so that the UNET learns the residual (difference). Residual learning (compared to learning the full reconstruction task) proved to be a less challenging task, requiring less model complexity. Furthermore, the addition of the in-block skips noticeably improved performance results. The depth of the UNET was five levels, with the top level limited to 64 kernels at most (due to hardware limitations) and 3x3 convolutional kernels.

The discriminator was topped with a dense layer and sigmoid activation appropriate for the binary classification of images (classifying generator reconstructions versus ground truth) using binary cross-entropy loss. In addition to the typical generator GAN loss (binary cross-entropy of the discriminator judgment of generator output compared with ones, or −log[D(m̂)]), the generator loss was conditioned on a weighted sum of L1 and L2 loss terms comparing generator output with the target reconstruction, a data-consistency loss term comparing the output and ground truth in the spatial frequency domain (k-space), and an inception loss comparing the InceptionV3 [21] feature representations of generator output and ground truth.
Overall, this results in

L(θ_G) = −log(D(m̂)) + λ_1 L_1(m̂, m) + λ_2 L_2(m̂, m) + λ_DC L(F(m̂), F(m)) + λ_f L(I(m̂), I(m)),

where F is the Fourier transform that maps the images to the frequency domain and I is the Inception network used to extract features. Note that the Inception network was pretrained on ImageNet [22] and locked (no weight updates) during training. In other words, the InceptionV3 network was used only to calculate a perceptual loss [23], that is, to evaluate the performance of the generator (or to accentuate feature-level irregularities of the generator reconstruction); it is not part of the generator's architecture and need not be used in deployment. In the absence of the Inception feature loss, the L1-L2 loss would focus on pixel-level similarity, which is useful in improving the performance metrics (discussed shortly) but leaves noticeable residual aliasing artifacts in the reconstruction. The focus on the feature loss (at later epochs of training) helped resolve these residual aliasing artifacts. The addition of the frequency-domain data-consistency loss helped capture the higher spatial frequency details of the anatomy.

Fig. 2. Symbolic network architecture. The UNET consisted of five levels, starting with 64 channels at the first layer. The kernel size used for 2D convolutions was 3x3 (in both R=4 and R=8 experiments, due to computational limitations). The InceptionV3 network was pretrained on ImageNet and used to extract and compare features from the generator output and target image. Each convolution block of the UNET consisted of three layers of convolution, batch normalization, and leakyReLU, interleaved with resnet-type skip connections.
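The weighted generator loss of Section II-B can be sketched as follows. This is our NumPy illustration (the paper's implementation used TensorFlow 2.2 / Keras); the pretrained InceptionV3 feature extractor is abstracted as a placeholder callable, since only its role as a fixed feature map matters here:

```python
import numpy as np

def generator_loss(m_hat, m, d_out, lam1, lam2, lam_dc, lam_f,
                   features=lambda x: x.mean(axis=(-2, -1), keepdims=True)):
    """Composite generator loss, sketched in NumPy.

    m_hat, m : generator output and fully sampled target images
    d_out    : discriminator probability D(m_hat), a scalar in (0, 1)
    features : stand-in for the locked, ImageNet-pretrained InceptionV3
               feature extractor (placeholder, not the real network)
    """
    adv = -np.log(d_out)                              # adversarial term
    l1 = np.mean(np.abs(m_hat - m))                   # L1 pixel loss
    l2 = np.mean((m_hat - m) ** 2)                    # L2 pixel loss
    # data-consistency loss in the spatial-frequency domain (k-space)
    dc = np.mean(np.abs(np.fft.fft2(m_hat) - np.fft.fft2(m)) ** 2)
    # perceptual (feature-level) loss
    feat = np.mean((features(m_hat) - features(m)) ** 2)
    return adv + lam1 * l1 + lam2 * l2 + lam_dc * dc + lam_f * feat
```

With a perfect reconstruction (m̂ = m), only the adversarial term remains, which is the behavior the schedule of Section II-E exploits by re-weighting the λ terms over the course of training.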
C. Dataset
The data used in this work were obtained from the NYU fastMRI Initiative database, with detailed descriptions of the datasets published previously in [24], [25]. In the present study, we used multi-coil, multi-slice human brain images from the fastMRI dataset. As this dataset includes a variety of real-world acquisitions (with different MR scanners, protocols, artifacts, contrasts, radio-frequency coils, etc.), and because variation in each of these factors (especially the number of coils) would cause significant variation in the results, we selected a subset of the dataset limited to images acquired with 16 receive coils. This removed a parameter that would otherwise significantly affect variance in results and, therefore, made result interpretation more straightforward. Other than the number of coils, and ensuring no subject overlap between train/validation/test sets, no other constraint was imposed on the multi-coil human dataset. The original data were fully sampled; the accelerations (subsampling) were imposed as post-processing steps. The choice of “16” was because it was the largest such subset of the dataset, and it was appropriate for an acceleration factor of R=4 and still reasonable for R=8.
D. Evaluation metrics
Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used to assess the performance [26]. The reconstructions were compared with a ground truth, defined as the root-sum-of-squares reconstruction of fully sampled k-space data from individual channels. PSNR was calculated as

PSNR = −20 log₁₀(RMSE / L),

where RMSE is the root-mean-square error and L is the dynamic range. SSIM was calculated as

SSIM = (2µ_x µ_y + c_1)(2σ_xy + c_2) / ((µ_x² + µ_y² + c_1)(σ_x² + σ_y² + c_2)),

using an 11x11 Gaussian filter of width 1.5 and c_1, c_2 of 0.01, 0.03 respectively.

E. Training and implementation details
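The PSNR and SSIM metrics of Section II-D can be computed as follows. This NumPy sketch is ours; for brevity the SSIM uses whole-image statistics rather than the 11x11 Gaussian-windowed local statistics the paper uses, so it is a simplified variant:

```python
import numpy as np

def psnr(ref, img, dynamic_range):
    """PSNR = -20 log10(RMSE / L), with L the dynamic range."""
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return -20.0 * np.log10(rmse / dynamic_range)

def ssim_global(x, y, c1=0.01, c2=0.03):
    """Single-window SSIM over whole-image statistics (simplification:
    the paper applies an 11x11 Gaussian window of width 1.5 instead)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

Identical images give SSIM = 1 and an unbounded PSNR, so in practice both are evaluated between the accelerated reconstruction and the fully sampled RSS ground truth.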
Individual loss terms were normalized to be on similar scales. Training started with a focus on L1 similarity, with λ_1 = 120, λ_2 = 30, λ_DC = 0, λ_f = 0. Midway through training (30 to 50 epochs), the weight balance of the L1-L2 loss gradually changed to λ_1 = 30, λ_2 = 120. After 100 epochs, the focus shifted to the feature loss and the data-consistency loss while maintaining the L1-L2 weights, with λ_DC = 30, λ_f = 100.

The GAN was trained using 100 subjects (1600 axial slices), while the validation and test datasets each included an additional 100 subjects, without any subject overlap between the three subsets. An Adam optimizer [27] with a customized learning rate schedule was used. Custom Python scripts were used for the GRAPPA and GAN implementations, with the GAN implemented using TensorFlow 2.2 / Keras. The network was trained for 200 epochs using one NVIDIA Tesla V100 GPU.

III. RESULTS
Figure 3 and Figure 4 present a qualitative comparison between reconstructions using regularized GRAPPA and GP-GAN. As presented in Table I, with an acceleration factor of R=4, regularized GRAPPA resulted in PSNR=33.88dB and SSIM=0.84. The GAN improved the results to PSNR=37.65dB and SSIM=0.93. The average root-mean-square error reduced from 0.021 to 0.013 for R=4 and from 0.075 to 0.033 for R=8, using GRAPPA and GAN, respectively. The increase in SSIM is due to the reduced standard deviation (σ_x) of the GAN reconstruction, suggesting a higher statistical signal-to-noise ratio (SNR ∝ mean(signal) / std(noise)) using the GAN.

Fig. 3. Comparing reconstruction quality at acceleration factor R=4. Left: ground truth (fully sampled root-sum-of-squares reconstruction); center: regularized GRAPPA reconstruction (uniform undersampling, 8% ACS lines); right: GAN reconstruction.

Fig. 4. Comparing reconstruction quality at acceleration factor R=8. Left: ground truth (fully sampled root-sum-of-squares reconstruction); center: regularized GRAPPA reconstruction (uniform undersampling, 4% ACS lines); right: GAN reconstruction.

TABLE I
COMPARING AVERAGE PERFORMANCE RESULTS FOR DIFFERENT ACCELERATION FACTORS (R) WITH REGULARIZED GRAPPA AND GAN.

                R=4               R=8
           PSNR    SSIM      PSNR    SSIM
GRAPPA     33.88   0.84      22.45   0.51
GAN        37.65   0.93      29.64   0.84

IV. DISCUSSION
While the primary purpose of the proposed technique is reconstruction of sub-sampled k-space (i.e. addressing the aliasing artifact), the fully sampled dataset was contaminated with other common real-world artifacts (Gibbs artifacts, motion artifacts, etc.) which were often mitigated in the final GAN reconstruction. Figure 5 illustrates artifact suppression. Moreover, the GAN reconstruction was effective in denoising reconstructions and improving the average statistical signal-to-noise ratio of the images. Incorporating GRAPPA into the data-driven reconstruction pipeline improves the structural fidelity of the reconstructed images, making sure that no significant structures are added or deleted in the final result (although some details are inevitably lost due to undersampling).

Fig. 5. Denoising and artifact suppression using the proposed GAN. In both (a) and (b), the left subfigures are the ground truth and the right subfigures are the GAN reconstructions. The lower row shows the zoomed-in and rescaled detail views of the respective red boxes.

While the dataset included acquisitions using various numbers of receiver channels (from 8 to 23 receive channels), in order to prevent high variance in accelerated reconstructions due to variance in receiver channel count, we used only a subset of the dataset including only acquisitions with exactly 16 receive channels. Nevertheless, an acceleration factor of R=8 using only 16 receive channels results in significant noise in the GRAPPA reconstruction. By comparison, the GAN reconstructions are noticeably less noisy even with R=8 acceleration.

Building on previous works [28]–[31], various elements of the generator loss function ensure different aspects of the reconstruction fidelity. The perceptual prior imposed using the Inception network is aimed at achieving feature-level consistency.
This ensures that prominent features of the reconstruction follow the same distribution as the target dataset. While this helps eliminate the residual aliasing artifacts, it also captures and tries to replicate other real-world artifacts of the target dataset. The latter is mitigated by the data-consistency loss term.

In the future, we would like to build upon this work by integrating a GAN with a compressed-sensing solution of the image reconstruction problem.

V. CONCLUSION
A generative adversarial network was used to improve the quality of accelerated MR image reconstruction using regularized GRAPPA. The results demonstrate a significant reduction in root-mean-square error of accelerated reconstructions compared with the fully sampled ground truth.

VI. ACKNOWLEDGEMENTS
The authors acknowledge funding from NIH U01EB025144, P30 NS076408 and P41 EB027061 grants.
REFERENCES

[1] B. Keil and L. L. Wald, “Massively parallel MRI detector arrays,” Journal of Magnetic Resonance, vol. 229, pp. 75–89, Apr. 2013. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S109078071300030X.
[2] N. Tavaf, R. L. Lagore, S. Jungst, et al., “A Self-Decoupled 32 Channel Receive Array for Human Brain Magnetic Resonance Imaging at 10.5T,” Sep. 2020. [Online]. Available: http://arxiv.org/abs/2009.07163.
[3] G. Shajan, J. Hoffmann, G. Adriany, et al., “A 7T Head Coil with 16-channel dual-row transmit and 31-channel receive for pTx applications,” 2016. [Online]. Available: http://archive.ismrm.org/2016/2132.html.
[4] G. Adriany, J. Radder, N. Tavaf, et al., “Evaluation of a 16-Channel Transmitter for Head Imaging at 10.5T,” IEEE, Sep. 2019, pp. 1171–1174. [Online]. Available: https://ieeexplore.ieee.org/document/8879131/.
[5] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, et al., “SENSE: Sensitivity encoding for fast MRI,” Magnetic Resonance in Medicine, vol. 42, no. 5, pp. 952–962, Nov. 1999.
[6] M. A. Griswold, P. M. Jakob, R. M. Heidemann, et al., “Generalized autocalibrating partially parallel acquisitions (GRAPPA),” Magnetic Resonance in Medicine, vol. 47, no. 6, pp. 1202–1210, Jun. 2002. [Online]. Available: http://doi.wiley.com/10.1002/mrm.10171.
[7] D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006. [Online]. Available: http://ieeexplore.ieee.org/document/1614066/.
[8] D. Liang, J. Cheng, Z. Ke, et al., “Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks,” IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 141–151, Jan. 2020.
[9] A. Torfi, R. A. Shirvani, Y. Keneshloo, et al., “Natural Language Processing Advancements By Deep Learning: A Survey,” Mar. 2020. [Online]. Available: http://arxiv.org/abs/2003.01200.
[10] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial networks,” Communications of the ACM.
[12] G. Yang, et al., “DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1310–1321, 2018.
[13] B. Murugesan, V. R. S, K. Sarveswaran, et al., “Recon-GLGAN: A Global-Local context based Generative Adversarial Network for MRI Reconstruction,” Aug. 2019. [Online]. Available: http://arxiv.org/abs/1908.09262.
[14] H. Emami, M. Dong, and C. K. Glide-Hurst, “Attention-Guided Generative Adversarial Network to Address Atypical Anatomy in Modality Transfer,” Jun. 2020. [Online]. Available: http://arxiv.org/abs/2006.15264.
[15] M. Mardani, E. Gong, J. Y. Cheng, et al., “Deep Generative Adversarial Neural Networks for Compressive Sensing MRI,” IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 167–179, Jan. 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8417964/.
[16] P. Deora, B. Vasudeva, S. Bhattacharya, et al., “Structure Preserving Compressive Sensing MRI Reconstruction using Generative Adversarial Networks,” Oct. 2019. [Online]. Available: http://arxiv.org/abs/1910.06067.
[17] S. U. Dar, M. Yurt, M. Shahdloo, et al., “Prior-guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks,” IEEE Journal on Selected Topics in Signal Processing, vol. 14, no. 6, pp. 1072–1087, 2020.
[18] N. Tavaf, K. Ugurbil, and P.-F. Van de Moortele, “Reconstruction of Accelerated MR Acquisitions with Conditional Generative Adversarial Networks,” in International Society of Magnetic Resonance in Medicine, 2021, p. 723.
[19] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Lecture Notes in Computer Science, vol. 9351, pp. 234–241, May 2015. [Online]. Available: http://arxiv.org/abs/1505.04597.
[20] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” vol. 1, pp. 448–456, Feb. 2015. [Online]. Available: https://arxiv.org/abs/1502.03167v3.
[21] C. Szegedy, V. Vanhoucke, S. Ioffe, et al., “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Dec. 2016, pp. 2818–2826. [Online]. Available: https://arxiv.org/abs/1512.00567v3.
[22] O. Russakovsky, J. Deng, H. Su, et al., “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, Sep. 2014. [Online]. Available: http://arxiv.org/abs/1409.0575.
[23] C. Ledig, L. Theis, F. Huszar, et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 105–114, Sep. 2016. [Online]. Available: http://arxiv.org/abs/1609.04802.
[24] F. Knoll, J. Zbontar, A. Sriram, et al., “fastMRI: A Publicly Available Raw k-Space and DICOM Dataset of Knee Images for Accelerated MR Image Reconstruction Using Machine Learning,” Radiology: Artificial Intelligence, vol. 2, no. 1, e190007, Jan. 2020. [Online]. Available: https://pubs.rsna.org/doi/abs/10.1148/ryai.2020190007.
[25] J. Zbontar, F. Knoll, A. Sriram, et al., “fastMRI: An Open Dataset and Benchmarks for Accelerated MRI,” Nov. 2018. [Online]. Available: http://arxiv.org/abs/1811.08839.
[26] Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, Apr. 2004.
[27] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), Dec. 2015. [Online]. Available: https://arxiv.org/abs/1412.6980v9.
[28] H. Rahmaninejad, T. Pace, S. Bhatt, et al., “Co-localization and confinement of ectonucleotidases modulate extracellular adenosine nucleotide distributions,” PLoS Computational Biology, vol. 16, no. 6, e1007903, Jun. 2020. [Online]. Available: https://doi.org/10.1371/journal.pcbi.1007903.
[29] K. W. Buffinton, B. B. Wheatley, S. Habibian, et al., “Investigating the Mechanics of Human-Centered Soft Robotic Actuators with Finite Element Analysis,” Institute of Electrical and Electronics Engineers Inc., May 2020, pp. 489–496.
[30] S. Habibian, “Analysis and Control of Fiber-Reinforced Elastomeric Enclosures (FREEs),” Master's Theses, Jan. 2019. [Online]. Available: https://digitalcommons.bucknell.edu/masters_theses/229.
[31] S. Habibian, M. Dadvar, B. Peykari, et al., “Design and implementation of a maxi-sized mobile robot (Karo) for rescue missions,”