On the reconstruction accuracy of multi-coil MRI with orthogonal projections
Anna Breger, Gabriel Ramos Llorden, Gonzalo Vegas Sanchez-Ferrero, W. Scott Hoge, Martin Ehler, Carl-Fredrik Westin
Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
Department of Radiology, Brigham and Women's Hospital, Boston, MA
Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
September 2019
Abstract
MRI signal acquisition with multiple coils in a phased array is nowadays commonplace. The use of multiple receiver coils increases the signal-to-noise ratio (SNR) and enables accelerated parallel imaging methods. Some of these methods, like GRAPPA or SPIRiT, yield individual coil images in the k-space domain which need to be combined to form a final image. Coil combination is often the last step of the image reconstruction, where the root sum of squares (rSOS) is frequently used. This straightforward method works well for coil images with high SNR, but can yield problems in images with artifacts or low SNR in all individual coils. We aim to analyze the final coil combination step in the framework of linear compression, including principal component analysis (PCA). With two data sets, one simulated and one in-vivo, we use random projections as a representation of the whole space of orthogonal projections. This allows us to study the impact of linear compression in the image space with diverse measures of reconstruction accuracy. In particular, the $L^2$ error, variance, SNR, and visual results serve as performance measures to describe the final image quality. We study their relationships and observe that the $L^2$ error and variance strongly correlate, but, as expected, minimal $L^2$ error does not necessarily correspond to the best visual results. In terms of visual evaluation and SNR, the compression with PCA outperforms all other methods, including rSOS on the uncompressed image space data.

1 Introduction
Magnetic Resonance Imaging (MRI) is a unique medical imaging modality that provides excellent image quality without ionizing radiation, but on the other hand is relatively slow. The acquired data, sampled in the k-space domain, corresponds to the Fourier transform of the spatial-domain MR image. To reconstruct an accurate MR image, the total amount of k-space data that must be acquired to avoid reconstruction artifacts is dictated by sampling theory.

A turning point in MRI reconstruction was the implementation of phased-array coils in the acquisition pipeline. With phased-array coils, the k-space data is acquired in multiple receivers/coils simultaneously. On top of increasing the signal-to-noise ratio (SNR), a phased-array coil acquisition allows accelerating the total acquisition time [22]. These methods, known as Parallel Imaging (PI) methods, exploit the fact that there exists complementary k-space data information in each coil/receiver. In PI, the k-space data set is undersampled, but the multiple measurements allow missing k-space samples to be recovered through an inverse problem reconstruction framework. See [14] for an overview of basic reconstruction algorithms for PI and their history.

The computational costs and required memory of the reconstruction algorithms highly depend on the dimension of the phased array, i.e. the number of receiver coils. Several coil compression methods have been developed that reduce the coil array to a smaller set of virtual channels without a significant loss of SNR, see e.g. [29], [8], [26]. Moreover, PCA-based methods have been shown to even have a beneficial denoising effect, e.g. [6]. In [5], the optimal linear projection for coil compression based on the resulting SNR is derived. Note that this is related to our analysis approach, but we work in the image domain instead of the k-space domain.

After the obligatory compression step, PI reconstruction methods can be applied and generate results in different output spaces.
Some of the methods operate in the image domain, e.g. SENSE [20]. Other methods operate in the k-space domain and generate a fully-sampled k-space array for each receiver/coil, e.g. GRAPPA or SPIRiT [15, 19]. To form a spatial-domain image that can be used for analysis, quantification, or visualization purposes, the k-space domain data is inversely Fourier transformed and the resulting individual coil images need to be combined into a single final image.

In [22], optimal methods to combine the arrays from phased array elements have been developed. These methods rely on detailed knowledge of the receive sensitivity of each coil. In practice, the exact position and sensitivity of each coil is not always known or not possible to estimate because of physical limitations. To overcome this issue, a method that combines the data without detailed knowledge of the coils, while preserving a high SNR, is desirable. Because of this, the root sum of squares (rSOS) method has become the standard method of combining multi-coil images in MRI [22, 25]. For arrays with high SNR, the rSOS yields nearly optimal reconstruction, whereas problems arise if all coils yield low SNR. Especially data with artifacts are problematic, since rSOS weights them equally to the non-defective parts.

Noise is ever present in MRI and is dominated by two sources: thermal noise in the receiver apparatus and physiological noise from the patient's body itself. Moreover, the SNR in the coil images depends highly on the field strength, scanner hardware, and data acquisition modality (e.g. T1/T2/diffusion weighted). An extensive statistical noise analysis can be found in [1]. In [24, 18], PCA was used to jointly reconstruct multi-coil data from diffusion MRI, exploiting the expected redundancy of data acquired with different gradient directions. Under the framework of random matrix theory, the authors derived practical rules to accurately nullify noise-only principal components.
This way, dMRI data can be effectively denoised while preserving the important information.

In this work, we aim to analyze the coil combination in a comprehensive way by comparing different performance measures. To address coil combination independent of prior knowledge, we study the rSOS in the framework of linear compression via orthogonal projections. Random projections will serve us as a tool to study the correlation between the $L^2$ reconstruction error and the voxel variance of the varying reconstructions. Correlation analysis with random orthogonal projections has been studied in a related context in [4], yielding an underlying understanding of the relation between important information preservation features in data combination and compression.

Besides the correlation analysis, we observe that an optimal $L^2$ reconstruction error, i.e. mean squared error (MSE), does not necessarily correspond to an optimal visual evaluation of the reconstructed images. This corresponds to the described problems when measuring image quality by the MSE, see e.g. [27, 13, 28]. Nevertheless, the often used peak signal-to-noise ratio (PSNR) is based on the computation of the MSE. Here, we will additionally evaluate the reconstructions regarding the SNR and visualization of the images. The results confirm an improvement by compressing image space data with PCA.

The outline is as follows: first we will explain how rSOS can be interpreted in the context of orthogonal projections. Moreover, we will describe samples of random orthogonal projections that shall serve us as coverings enabling numerical experiments. Then we describe the two data sets, one simulated and one in-vivo, that we use for experimental investigations. The simulated data set allows us to compare the behavior with different noise levels in the coils.
In the results section, we show scatter plots describing the relation between the $L^2$ reconstruction error and the voxel variance in reconstructed magnitude images, as well as comparing them with the SNR and visual outcomes. Finally, we will interpret and discuss the results in Section 5.

2 Methods

We aim to study the impact of linear compression in coil combination with rSOS. Such a combination step is necessary for phased-array data that has been processed with GRAPPA, SPIRiT, or other PI methods in k-space. Before combining the fully sampled, multidimensional image space data as the final step of the reconstruction pipeline, we include linear compression with orthogonal projections.

The common reconstruction pipeline including our linear compression can be summarized as follows:
$$\hat{y}_i \in \mathbb{C}^d \;\xrightarrow{\text{GRAPPA}}\; \hat{x}_i \in \mathbb{C}^d \;\xrightarrow{\text{IFFT}+\mathrm{abs}}\; x_i \in \mathbb{R}^d \;\xrightarrow{\text{projection}}\; p\,x_i \in \mathbb{R}^k \;\xrightarrow{\text{rSOS}}\; \|p\,x_i\| \in \mathbb{R},$$
where $\hat{y}_i$ corresponds to an undersampled k-space voxel, $\hat{x}_i$ is fully sampled in k-space after some PI reconstruction (e.g. GRAPPA), and $x_i$ is the image space data. Our analysis takes place on the image space data in $\mathbb{R}^d$ that is subsequently projected to $\mathbb{R}^k$ with $k < d$.

As the basis of our analysis we use random orthogonal projections yielding different linear coil compressions. We work with image data consisting of $d$ channels that shall be combined into one final magnitude image. To do so, we study the space of $k$-dimensional linear subspaces of $\mathbb{R}^d$, which can be identified with the orthogonal projections
$$\mathcal{G}_{k,d} = \{\, p \in \mathbb{R}^{d \times d} : p^2 = p,\; p^T = p,\; \operatorname{rank}(p) = k \,\}, \qquad (1)$$
called the Grassmannian. Let $x = \{x_i\}_{i=1}^m \subset \mathbb{R}^d$ be an image space data set with $m$ voxels measured by $d$ coils. Then, the final image volume is given by $\|px\|$ with $p \in \mathcal{G}_{k,d}$, where $d$ is the fixed number of channels and $k < d$ varies. This corresponds to projecting the $d$ coil channels to different dimensions $k$ and computing the root sum of squares (rSOS) subsequently.
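The combination step and its projected variant can be sketched in a few lines of NumPy; the toy array below is a hypothetical example, not data from the paper:

```python
import numpy as np

def rsos(x):
    """Root sum of squares over the coil axis (last axis): ||x_i|| per voxel."""
    return np.linalg.norm(x, axis=-1)

def project_then_rsos(x, p):
    """Compress the d coil channels with an orthogonal projection p
    (d x d, p^2 = p = p^T, rank k) before combining with rSOS: ||p x_i||."""
    return np.linalg.norm(x @ p, axis=-1)  # p is symmetric, so rows of x @ p are (p x_i)^T

# toy example: m = 4 voxels measured by d = 3 coils (hypothetical values)
x = np.array([[3.0, 4.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 1.0, 1.0]])
p = np.diag([1.0, 1.0, 0.0])    # projection onto the first two coil channels
print(rsos(x))                  # [5.  1.  2.  1.732...]
print(project_then_rsos(x, p))  # [5.  1.  0.  1.414...]
```

Note how the projection leaves voxels supported on the retained channels untouched but can erase a voxel (here the third) whose signal lives entirely in the discarded channel.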
Note that we can study the rSOS itself in this context, since $\mathrm{rSOS}(x) = \|x\|$ and it holds for all $p \in \mathcal{G}_{k,d}$ that the expectation value satisfies
$$\mathbb{E}\big[\|p\,x_i\|\big] = c \cdot \|x_i\| \quad \forall\, x_i \in \mathbb{R}^d, \qquad (2)$$
with $c = \big(\Gamma(\tfrac{k+1}{2})\,\Gamma(\tfrac{d}{2})\big) \big/ \big(\Gamma(\tfrac{k}{2})\,\Gamma(\tfrac{d+1}{2})\big)$, where $\Gamma$ denotes the Gamma function. This is based on the Chi distribution and the fact that the length of a random unit vector projected onto a fixed $k$-dimensional subspace has the same distribution as the length of a unit vector in $\mathbb{R}^d$ being projected onto a random $k$-dimensional subspace (see e.g. [12]).

Remark 2.1.
The equality (2) allows us to analyze the rSOS itself in the framework of orthogonal projections, i.e. with no added coil compression in the image space. Summing up the projected voxels obtained by a reasonably big sample set of random orthogonal projections $\{p_l\}_{l=1}^n \subset \mathcal{G}_{k,d}$ yields approximately the rSOS-combined voxel up to the constant $c$, i.e.
$$\frac{1}{n}\sum_{l=1}^{n} \|p_l\, x_i\| \approx c \cdot \|x_i\|. \qquad (3)$$
Since we linearly rescale the final image volumes to $[0,1]$ to enable error estimation with the ground truth, the constant is negligible. Note that also principal component analysis (PCA) yields an orthogonal projection that lies in the Grassmannian manifold and therefore can be studied in this context. Moreover, we will use random orthogonal projections, i.e. projections $p \in \mathcal{G}_{k,d}$ distributed according to the orthogonally invariant probability measure $\mu_{k,d}$, as samples of orthogonal projections, see e.g. [2], [3].

The $L^2$ error will serve us as a measure of accuracy when comparing the ground truth data with the final reconstructions, i.e. the combined image space data $x = \{x_i\}_{i=1}^m \subset \mathbb{R}^d$. Note that w.l.o.g. $x$ contains here just the voxels in the region of interest (discarding the background) and not the full image volume. For a fixed projection $p \in \mathcal{G}_{k,d}$, the error between some provided ground truth $y = \{y_i\}_{i=1}^m$ and the combined magnitude image voxels $\|px\| := \{\|p\,x_i\|\}_{i=1}^m$ is then given by
$$\operatorname{Err}\big(y, \|px\|\big) := \frac{1}{m}\sum_{i=1}^{m} \big(y_i - \|p\,x_i\|\big)^2. \qquad (4)$$
We aim to study the relation between the reconstruction error and the variance in the reconstructed magnitude images, which can be interpreted as contrast in noise-free images. The variance of the final magnitude images depending on the projection method can be computed by
$$\operatorname{Var}\big(\|px\|\big) = \frac{1}{m-1}\sum_{i=1}^{m} \Big(\|p\,x_i\| - \frac{1}{m}\sum_{j=1}^{m}\|p\,x_j\|\Big)^2. \qquad (5)$$

Let the covering radius of a finite set $\{p_1, \dots$
$, p_n\} \subset \mathcal{G}_{k,d}$ be denoted by
$$\rho\big(\{p_l\}_{l=1}^n\big) := \sup_{p \in \mathcal{G}_{k,d}} \min_{1 \le l \le n} \|p - p_l\|_F, \qquad (7)$$
where $\|\cdot\|_F$ is the Frobenius norm. The covering radius measures how well $\{p_l\}_{l=1}^n$ represents the entire space $\mathcal{G}_{k,d}$: a smaller covering radius yields smaller holes and points that are better spread.

Let $\mu_{k,d}$ denote the normalized Riemannian measure on $\mathcal{G}_{k,d}$ as before. According to [21], the expectation of the covering radius $\rho$ of $n$ random points $\{p_j\}_{j=1}^n$, independent identically distributed according to $\mu_{k,d}$, satisfies
$$\mathbb{E}\,\rho \sim n^{-\frac{1}{k(d-k)}} \log(n)^{\frac{1}{k(d-k)}}. \qquad (8)$$
We use the symbol $\sim$ to indicate that the corresponding equalities hold up to a positive constant factor on the respective right-hand side. Following the definition of asymptotically optimal coverings in [3], the expectation yields an optimal covering radius up to a logarithmic factor $\log(n)^{1/(k(d-k))}$. To remain flexible in the dimension of $\mathcal{G}_{k,d}$, i.e. the choice of $k$ and $d$, and the number of projections $n$, we work here with random projections distributed according to $\mu_{k,d}$ rather than constructing optimal covering sequences as in [3]. These random orthogonal projections can be efficiently computed by the QR decomposition of a matrix $M = QR$ with independent standard normally distributed entries [7, Theorem 2.2.2]. In the following we will use finitely many samples of random projections for the experimental analysis.

3 Data

We run our numerical analysis on two different MR data sets: a simulated T1-weighted data set from BrainWeb ([10], [17], [11]) and an in-vivo data set from a head coil receiver.

3.1 Simulated data

Simulation experiments were conducted to assess the quality of image reconstruction for different types of projections in a controlled, rigorous manner. To do so, first, a ground-truth volume was created with the popular numerical simulator BrainWeb [9]. A (magnitude) multi-slice T1-weighted volume was simulated with a Spoiled Fast Low Angle Shot (SFLASH) sequence with the following parameters: TR/TE = 20/10 ms, flip angle of 90 degrees and ETL = 1.
Matrix size: 181 × × 76 with isotropic voxel size of 1 mm. Next, simulated images with 32 channels, mimicking a 32-channel coil acquisition, were created. First, synthetic coil sensitivity profiles were simulated assuming a smooth Gaussian profile [1]. Voxel-wise multiplication of those coil sensitivities by the simulated ground-truth image creates the 32 coil-based images. Uncorrelated complex Gaussian noise with zero mean and standard deviation $\sigma$ was added. Finally, to simulate a magnitude-based acquisition, the absolute value of the 32 noisy images was taken. The value of $\sigma$ was chosen differently to recreate experiments with different noise levels.

3.2 In-vivo data

MR data was acquired in-vivo from a healthy volunteer using a Siemens (Erlangen, Germany) 3T Prisma equipped with a 32-channel head coil receiver. An Echo-Planar diffusion sequence was employed to acquire 24 slices in a 2 mm isotropic volume, with slice thickness 2 mm (TR = 3.2 s, TE = 85 ms, flip angle = 90, matrix size 128×128, FOV: 256 mm by 256 mm). A T2-weighted volume was acquired with no diffusion weighting (a "b=0" image), followed by 62 volumes acquired using a single repeated diffusion vector and a diffusion setting of b=1000. The measured EPI data was reconstructed using Dual-Polarity GRAPPA (DPG) to minimize Nyquist ghosts, see [16]. In-plane acceleration of the original data was R = 2. The ground-truth image was formed by motion-correcting the 62-volume diffusion-weighted series, using the Advanced Normalization Tools (ANTs) library [23], and averaging across time. The first diffusion direction was studied regarding linear coil combination and compared to the computed ground truth. A few outliers were removed by clipping the top 0.01% of values to the maximum of the used percentile.
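The simulated multi-coil acquisition of Section 3.1 can be sketched as follows. The flat phantom, the coil-center placement, the Gaussian-bump width, and the noise level below are hypothetical stand-ins for the BrainWeb volume and the 32 synthetic sensitivity profiles, chosen only to keep the sketch small:

```python
import numpy as np

def simulate_coils(gt, centers, width, sigma, rng):
    """Multi-coil magnitude simulation: a smooth Gaussian sensitivity per coil,
    voxel-wise multiplication with the ground truth, additive complex Gaussian
    noise of standard deviation sigma, then the absolute value."""
    h, w = gt.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coils = []
    for cy, cx in centers:
        sens = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * width ** 2))
        noise = sigma * (rng.standard_normal((h, w))
                         + 1j * rng.standard_normal((h, w)))
        coils.append(np.abs(sens * gt + noise))
    return np.stack(coils, axis=-1)  # shape (h, w, n_coils)

rng = np.random.default_rng(0)
gt = np.ones((64, 64))                          # hypothetical flat phantom
centers = [(0, 0), (0, 63), (63, 0), (63, 63)]  # 4 corner coils instead of 32
coils = simulate_coils(gt, centers, width=32.0, sigma=0.05, rng=rng)
print(coils.shape)  # (64, 64, 4)
```

Taking the absolute value last mirrors the magnitude-based acquisition described above; the resulting per-coil images follow a Rician-type rather than Gaussian noise model.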
4 Results

Both studied image volumes consist of phased-array data with 32 channels; therefore we work with projections $p \in \mathcal{G}_{k,d}$ with $d = 32$ and varying $k$. In the following scatter plots we see how the $L^2$ reconstruction error (4) relates to the voxel variance (5), and we state corresponding mean SNR values in the visualization. Each point in the plots represents a reconstruction with linear compression and rSOS, yielding a final magnitude image volume as described in the previous sections. The symbols + correspond to a projection $p \in \mathcal{G}_{k,32}$, i.e. $\|px\|$, and the colors to the six different dimensions $k$ that are displayed. As described in Section 2.3, the projections $p$ serve as a covering of the underlying space and are chosen randomly according to the orthogonally invariant probability measure $\mu_{k,d}$. For the simulated data set (see Section 3.1), $n = 500$ projections are randomly chosen for every space $\mathcal{G}_{k,32}$; for the in-vivo data set (see Section 3.2), $n = 1000$. In the scatter plots the symbol $\square$ corresponds to the rSOS, i.e. $\|x\|$, and $\circ$ to the compression with the orthogonal projection provided by PCA. The colors correspond to the different spaces $\mathcal{G}_{k,32}$ as stated above.

Figure 1 contains the scatter plots for 4 different noise levels in the simulated data set and Figure 2 shows corresponding image cross-sections. Figure 3 shows the scatter plot and image cross-sections of the in-vivo data set.

Figure 1: Scatter plots for the simulated data set with different noise levels, showing the $L^2$ reconstruction error (4) and variance (5) of the final images obtained by combining the coils with rSOS and compression by random projections and PCA in $\mathcal{G}_{k,32}$. The four panels correspond to mean SNR values of (a) 10.64, (b) 5.45, (c) 2.77, and (d) 1.96. A varying amount of Gaussian noise was added in each coil assuming no correlation [1].
The SNR value corresponds to the mean over all voxels in the reconstructed rSOS image volume without the linear compression.

Figure 2: Cross-sectional images corresponding to the scatter plots in Figure 1 for the simulated data set with different noise levels. The left column shows the first channel of the simulated noisy 32-channel coil array before the coil combination step. The second column shows the final image obtained by rSOS including compression with the random projection that yields the minimal $L^2$ reconstruction error (4). The third column shows the image provided by rSOS without compression. The two last columns show the final images when using PCA for compression in the two spaces yielding the highest mean SNR. We can directly see that the lowest $L^2$ error does not directly correspond to the highest SNR. Moreover, the visual evaluation suggests that the SNR describes the image quality more accurately.

Figure 3: Left: scatter plot for the in-vivo data set showing the reconstruction error (4) and variance (5) obtained by combining the coils with rSOS and compressing with random projections and PCA in $\mathcal{G}_{k,32}$. Right: final cross-sectional images corresponding to the compression methods in the scatter plot. The SNR value corresponds to the mean over all voxels in the reconstructed image volume, see (6).

5 Discussion

In Figures 1 and 3 we illustrate the scatter plots of the described linear compression methods for the two data sets. To simplify the visualization we display the spaces $\mathcal{G}_{k,32}$ only for six values of $k$. In all plots we can see that the smaller the dimension $k$, the more the projections are spread with respect to $\operatorname{Err}(y, \|px\|)$ and $\operatorname{Var}(\|px\|)$; the reconstructions including random projections with the smallest displayed $k$ are widely distributed, whereas the projections with the largest displayed $k$ are clustered closely around the rSOS.
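This spread of (Err, Var) points across random projections can be reproduced in miniature. Everything below is a synthetic toy (small $d$, $k$, a simple low-noise coil model, and hypothetical values), not the paper's data; it samples random projections via QR as in Section 2, evaluates (4) and (5), and checks the strong negative correlation observed at low noise:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, k, n = 2000, 8, 2, 300
# clean coil data: per-voxel amplitude a_i (the contrast) times a coil
# profile that is a shared direction plus a small voxel-wise perturbation
a = rng.uniform(0.2, 1.0, m)
u = rng.standard_normal(d) + 0.3 * rng.standard_normal((m, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x = a[:, None] * u
y = np.linalg.norm(x, axis=1)      # ground truth: rSOS of the clean data

errs, variances = [], []
for _ in range(n):
    # random p = q q^T in G_{k,d} via QR of a Gaussian matrix;
    # ||p x_i|| = ||q^T x_i|| since q has orthonormal columns
    q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    r = np.linalg.norm(x @ q, axis=1)
    errs.append(np.mean((y - r) ** 2))     # L2 error, eq. (4)
    variances.append(np.var(r, ddof=1))    # voxel variance, eq. (5)

corr = np.corrcoef(errs, variances)[0, 1]
print(corr)  # strongly negative: lower error goes with higher variance
```

Projections well aligned with the shared coil direction preserve more of each voxel's norm, which simultaneously lowers the error and raises the contrast, hence the negative correlation.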
The smaller the $k$, the more original information can be randomly dismissed: the preservation and loss of original information varies less for image space data compressed by a random projection with large $k$ than for compression with small $k$.

For the lower noise levels in the simulated data, Figure 1 (a)-(b), we can directly see that the correlation between the $L^2$ reconstruction error and the variance within the final combined images is very strong. Since variance can be interpreted as contrast in images with high SNR, this shows that high contrast directly relates here to a good reconstruction in the $L^2$ sense. However, the higher the noise level, the more the noise contributes to the measured variance, so it cannot directly be interpreted as contrast any more. In the simulated data set, Figure 1 (c)-(d), this can be seen in the change of the correlation behavior, where a higher variance does not relate to a lower reconstruction error for all dimensions $k$. In Figure 1 (d) the projections from the spaces $\mathcal{G}_{k,32}$ with larger $k$ do not show any connection between the variance and the $L^2$ error; only for $k = 1$ is there still some negative correlation, indicating that the randomization leads to several poor reconstructions where noise is secondary in comparison to issues with contrast. The scatter plot corresponding to the in-vivo data (Figure 3) shows similar behavior. Because of this varying behavior, the variance itself does not act as a useful measure of reconstruction accuracy in this experimental setup.

We can see that in all experiments rSOS itself, as well as rSOS including PCA compression, yields low $L^2$ reconstruction errors, but is always outperformed by some random projections. Nevertheless, regarding SNR and visualization, these reconstructed images with random compression cannot compete with PCA compression or rSOS. That indicates some contradicting behavior when measuring the reconstruction performance with the $L^2$ error versus the SNR.
Indeed, when using PCA for the simulated data set, the $L^2$ error is the lowest in one of the spaces $\mathcal{G}_{k,32}$, whereas the highest SNR is always achieved by PCA in another space, which, contradictorily, yields the worst $L^2$ error. Also in the in-vivo data set the highest SNR was achieved by using PCA in a space that again does not correspond to the lowest $L^2$ error. Following the visual results, it seems that the SNR describes the visual performance better here than the $L^2$ error. The compression with PCA in the corresponding space yields the highest SNR in all experiments and therefore outperforms rSOS without compression consistently. Compression with the standard PCA in $\mathcal{G}_{1,32}$ yields predominantly better results than rSOS, but an insufficient visual result on the in-vivo data: using only the first eigendirection yields a strong compression that loses too much important information here.

6 Conclusion

Based on random orthogonal projections, we have presented a numerical investigation of reconstruction accuracy for rSOS coil combination with linear compression. Two different MR data sets were used for our experiments: a simulated T1-weighted data set with a varying amount of noise and an in-vivo diffusion-weighted data set. We used diverse measures of performance to evaluate the image quality of the reconstructions. For the lower noise levels in the simulated data, the $L^2$ reconstruction error correlates strongly with the variance, but the behavior changes for higher noise levels and for the in-vivo data. Moreover, measuring the $L^2$ error and the SNR acts contradictorily in terms of optimality, and in these cases we observe that the visual evaluation corresponds more to the SNR. The highest SNR values were achieved by incorporating PCA as compression before using rSOS, outperforming rSOS with no compression in all experiments.
This clearly suggests using PCA on image space data before computing the final coil combination with rSOS, yielding a beneficial denoising effect with a higher SNR. Future work shall include related experiments in the k-space domain before PI reconstruction, where linear compression is highly beneficial to save subsequent computational costs.

Acknowledgments

The work was partly funded by the Austrian Marshall Plan Foundation and the Vienna Science and Technology Fund (WWTF) through project VRG12-009.

References

[1] S. Aja-Fernández and G. Vegas Sánchez-Ferrero. Statistical Analysis of Noise in MRI. Springer, 2016.
[2] J. S. Brauchart, A. B. Reznikov, E. B. Saff, I. H. Sloan, Y. G. Wang, and R. S. Womersley. Random point sets on the sphere: hole radii, covering, and separation. Experimental Mathematics, 27(1):62–81, 2018.
[3] A. Breger, M. Ehler, and M. Gräf. Points on manifolds with asymptotically optimal covering radius. Journal of Complexity, 48:1–14, 2018.
[4] A. Breger, J. Orlando, P. Harár, M. Doerfler, S. Klimscha, C. Grechenig, B. Gerendas, U. Schmidt-Erfurth, and M. Ehler. On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems. Journal of Mathematical Imaging and Vision (JMIV), 2019.
[5] M. Buehrer, K. P. Pruessmann, P. Boesiger, and S. Kozerke. Array compression for MRI with large coil arrays. Magnetic Resonance in Medicine, 57(6):1131–1139, 2007.
[6] Y. Chang and H. Wang. Kernel principal component analysis of coil compression in parallel imaging. Computational and Mathematical Methods in Medicine, 2018:1–9, 2018.
[7] Y. Chikuse. Statistics on Special Manifolds. Lecture Notes in Statistics. Springer, New York, 2003.
[8] A. Chu and D. Noll. Coil compression in simultaneous multislice functional MRI with concentric ring slice-GRAPPA and SENSE. Magnetic Resonance in Medicine, 76(4):1196–1209, 2016.
[9] C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, G. B. Pike, and A. C. Evans.
BrainWeb: online interface to a 3D MRI simulated brain database. In NeuroImage. Citeseer, 1997.
[10] C. A. Cocosco, V. Kollokian, R.-S. Kwan, G. Pike, and A. Evans. BrainWeb: online interface to a 3D MRI simulated brain database. NeuroImage, 5:425, 1997.
[11] D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans. Design and construction of a realistic digital brain phantom. IEEE Transactions on Medical Imaging, 17(3):463–468, 1998.
[12] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22(1):60–65, 2003.
[13] B. Girod. Psychovisual aspects of image processing: What's wrong with mean squared error? In Proceedings of the Seventh Workshop on Multidimensional Signal Processing, 1991.
[14] M. Griswold. Basic reconstruction algorithms for parallel imaging. In Parallel Imaging in Clinical MR Applications, pages 19–36. Springer, Berlin, Heidelberg, 2007.
[15] M. A. Griswold, P. M. Jakob, R. M. Heidemann, M. Nittka, V. Jellus, J. Wang, B. Kiefer, and A. Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine, 47(6):1202–1210, 2002.
[16] W. S. Hoge and J. R. Polimeni. Dual-polarity GRAPPA for simultaneous reconstruction and ghost correction of EPI data. Magnetic Resonance in Medicine, 76(1):32–44, 2016.
[17] R. K. Kwan, A. C. Evans, and G. B. Pike. MRI simulation-based evaluation of image-processing and classification methods. IEEE Transactions on Medical Imaging, 18(11):1085–1097, 1999.
[18] G. Lemberskiy, S. Baete, J. Veraart, T. M. Shepherd, E. Fieremans, and D. S. Novikov. Achieving sub-mm clinical diffusion MRI resolution by removing noise during reconstruction using random matrix theory. Proceedings of the International Society for Magnetic Resonance in Medicine, 27, 2019.
[19] M. Lustig and J. M. Pauly.
SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magnetic Resonance in Medicine, 64(2):457–471, 2010.
[20] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger. SENSE: sensitivity encoding for fast MRI. Magnetic Resonance in Medicine, 42(5):952–962, 1999.
[21] A. Reznikov and E. B. Saff. The covering radius of randomly distributed points on a manifold. International Mathematics Research Notices, 2016(19):6065–6094, 2015.
[22] P. B. Roemer, W. A. Edelstein, C. E. Hayes, S. P. Souza, and O. M. Mueller. The NMR phased array. Magnetic Resonance in Medicine, 16(2):192–225, 1990.
[23] N. J. Tustison, Y. Yang, and M. Salerno. Advanced normalization tools for cardiac motion correction. In O. Camara, T. Mansi, M. Pop, K. Rhode, M. Sermesant, and A. Young, editors, Statistical Atlases and Computational Models of the Heart - Imaging and Modelling Challenges, pages 3–12, Cham, 2015. Springer International Publishing.
[24] J. Veraart, D. S. Novikov, D. Christiaens, B. Ades-Aron, J. Sijbers, and E. Fieremans. Denoising of diffusion MRI using random matrix theory. NeuroImage, 142:394–406, 2016.
[25] D. O. Walsh, A. F. Gmitro, and M. W. Marcellin. Adaptive reconstruction of phased array MR imagery. Magnetic Resonance in Medicine, 43(5):682–690, 2000.
[26] J. Wang, Z. Chen, Y. Wang, L. Yuan, and L. Xia. A feasibility study of geometric-decomposition coil compression in MRI radial acquisitions. Computational and Mathematical Methods in Medicine, 2017:1–9, 2017.
[27] Z. Wang, A. C. Bovik, and L. Lu. Why is image quality assessment so difficult? In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 4, pages IV-3313–IV-3316, 2002.
[28] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[29] T. Zhang, J. M. Pauly, S. S. Vasanawala, and M. Lustig. Coil compression for accelerated imaging with Cartesian sampling.
Magnetic Resonance in Medicine, 69(2):571–582, 2013.