On Variant Strategies To Solve The Magnitude Least Squares Optimization Problem In Parallel Transmission Pulse Design And Under Strict SAR And Power Constraints
Andres Hoyos Idrobo, Pierre Weiss, Aurélien Massire, Alexis Amadon, Nicolas Boulant
Abstract—Parallel transmission is a very promising candidate technology to mitigate the inevitable radio-frequency (RF) field inhomogeneity in magnetic resonance imaging (MRI) at ultra-high field (UHF). For the first few years, pulse design utilizing this technique was expressed as a least squares problem with crude power regularizations aimed at controlling the specific absorption rate (SAR), hence the patient safety. This approach being suboptimal for many applications sensitive mostly to the magnitude of the spin excitation, and not its phase, the magnitude least squares (MLS) problem was first formulated in 2007. Despite its importance and the availability of other powerful numerical optimization methods, the MLS problem has so far been tackled by pulse designers almost exclusively with the so-called variable exchange method. In this paper, we investigate various two-stage strategies consisting of different initializations and nonlinear programming approaches, and directly incorporate the strict SAR and hardware constraints. Several schemes such as sequential quadratic programming (SQP), interior point (I-P) methods, semidefinite programming (SDP) and magnitude squared least squares (MSLS) relaxations are studied both in the small and large tip angle regimes, with RF and static field maps obtained in vivo on a human brain at 7 Tesla. Convergence and robustness of the different approaches are analyzed, and recommendations to tackle this specific problem are finally given. Small tip angle and inversion pulses are returned in a few seconds and in under a minute respectively while respecting the constraints, allowing the use of the proposed approach in routine.
Index Terms—RF parallel transmission, magnetic resonance imaging, mathematical programming, optimization.
I. Introduction
One of the main purposes of ultra-high field (UHF) magnetic resonance imaging (MRI) is to improve spatial resolution, thanks to an increased signal-to-noise ratio (SNR). The applicability of most MRI sequences, however,
A. Hoyos-Idrobo, A. Massire, A. Amadon and N. Boulant are with CEA, I2BM, NeuroSpin, UNIRS, Gif-sur-Yvette, France [email protected]
P. Weiss is with ITAV- USR3505, Université de Toulouse, CNRS,France [email protected]
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2013-2018) / ERC Grant Agreement n. 309674.

is challenged due to enhanced non-uniformities in the transmit-sensitivity and off-resonance profiles [1]. If not addressed, these can yield zones of important SNR losses across the images, detrimental to diagnosis. Over the last few years, a lot of research has been devoted to solving the above-mentioned problems, leading to an assortment of new powerful tools including shaped pulses [2], [3], radio-frequency (RF) shimming [4] and transmit SENSE parallel transmission (pTX) [5], [6], [7]. While RF shimming may be useful for very specific applications [8], due to its versatility pTX has so far proved to be almost indispensable to tackle the RF and static field inhomogeneity problem at UHF for specific absorption rate (SAR) demanding sequences [9], [10]. Due to the large number of degrees of freedom pTX provides, there can be many different SAR patterns for a given RF pulse performance. The goal of the RF pulse designer in general then is to find the pattern that satisfies the SAR guidelines [11] with the best pulse performance.

Most often, SAR constraints in RF pulse design have either been addressed indirectly through the use of Tikhonov power regularization factors [12], [13], [14], [15], [16] or by focusing on a more tractable, hence significantly smaller, subset of constraints [17], [18]. While the first set of approaches requires the tuning of some parameters, the second one generally does not encompass the complexity of the full SAR spatial distribution. It was not until recently that the crucial development of the virtual observation points (VOPs) [19], [20] made the problem considerably more tractable and facilitated the treatment of all the SAR constraints in RF pulse design utilizing pTX.
In [21], the authors could thereby study more thoroughly the trade-offs between the different constraints and the pulse performance in 2D applications. Furthermore, the harder but more beneficial magnitude least squares (MLS) formulation [22], [23] of the pulse design problem was to some extent tackled in the latter work, either by identifying a suitable target phase and solving the corresponding least squares problem, or by looping over different least squares problems, the so-called variable exchange method [24]. In most cases otherwise, the MLS problem was not addressed. While this can perhaps be justified for spiral k-space trajectories, where the rank of the matrix in the linear system is significant, this can lower pulse performance substantially for sparser trajectories such as spokes [23] and k_T-points [25].

The MLS problem in RF pulse design is equivalent to the phase retrieval problem, where a complex signal is sought based on the knowledge of magnitude measurements. It occurs in other branches of physics such as X-ray crystallography imaging [26], diffraction imaging [27] and microscopy [28]. The variable exchange method used in the MRI community is identical to the Gerchberg-Saxton algorithm published in 1972 [29]. Yet, despite the importance of this problem in RF pulse design and its known limitations [30], its use has been ubiquitous in the pulse designers' community [21], [22], [23], [25].

In this work, we investigate other strategies to solve the MLS problem in 3D, both in the small and the large flip angle (FA) regimes, based on RF field maps measured in vivo on a human brain at 7 Tesla. The strategies consist of two stages: an initialization and a nonlinear programming approach [31]. Moreover, we incorporate all 10-g and global SAR constraints using VOPs, as well as peak and average power constraints. Due to the nonconvexity of the problem, there is no proof that we find the global minimum.
However, by exploring different techniques such as sequential quadratic programming, interior point methods, semidefinite programming and magnitude squared least squares relaxations, experience can be built and greater confidence can be gained to finally make useful recommendations. We first present the general context, by defining the mathematical problem and some computational tricks we used to make the problem more tractable. For the sake of clarity and completeness, we then briefly describe the theory behind each technique investigated. Their corresponding performance, execution time and robustness are then presented and discussed.

II. Theory
The physical examples we shall work with are non-selective k_T-points pulses [25] with 30° and 180° target flip angles. The k-space trajectories in both cases were not optimized and consisted respectively of 5 and 7 k_T-points symmetrically located around the centre. The number of k_T-points and the number of channels are respectively called N_kT and N_c (here equal to 8) throughout. The flip angle (MLS) homogenization problem in the small tip angle approximation (STA) [32], [33] is written as:

  min_{x ∈ C^p} ‖ |Ax| − θ ‖₂² ,  (1)

where θ denotes the target FA (in rad), A ∈ C^{N×p} the spins' dynamics matrix restricted to the brain region (number of voxels N = 12 000, number of columns p = N_c × N_kT), and x the concatenated RF waveforms of the N_c different channels. The elements of A are given by

  a_{m,(j−1)N_c+n} = i s B_{1,n}(r_m) exp(i⟨r_m, k_j⟩) exp(iγ ΔB_0(r_m)(T − (j − 1/2)T_s)) .

For computational reasons that will appear shortly, and contrary to general practice [7], [13], [14], [15], the elements for each row are ordered first by channel and then by time. Above, ΔB_0 corresponds to the static field offset (in Tesla), γ is the gyromagnetic ratio, B_{1,n}(r_m) is the B_1 RF field (in µT) at location r_m for maximum voltage (here 180 Volts) corresponding to the n-th channel, and T is the total pulse duration. The k-space trajectory k(t) is equal to the time-reversed integration of the gradient waveforms to be played during excitation. The index j thus labels the k_T-points. Each sub-square pulse is 0.2 ms long for the 30° target and 0.5 ms long for the 180° target (duration T_s above). The scalar s in the expression as a result is the FA (in rad/µT) achieved for such a sub-pulse with peak RF amplitude of 1 µT. The time symmetry of the sub-pulses in the low FA regime also implies the (j − 1/2)T_s evolution in the same expression.

To address the SAR constraints, and for the sake of convenience, we use the Q matrices commonly used for SAR calculations [17].
For a single k_T-point, these are

  Q(r) = (σ(r) T_s)/(2 ρ(r) T_R) (E_x^H E_x + E_y^H E_y + E_z^H E_z)(r) ,  (2)

where σ and ρ are the conductivity and density respectively, T_R is the repetition time and E_x, E_y, E_z are the components of the complex electric field row vector at position r along the respective directions for the maximum voltage available on each channel, e.g. E_x(r) = [E_{x,1}(r), E_{x,2}(r), …, E_{x,N_c}(r)]. The superscript H throughout denotes Hermitian conjugation. Here, we take T_R such that the overall duty cycles for the 30° and 180° pulses are 10 % and 0.25 % respectively. For a single k_T-point, the SAR at location r for a pulse shape x that way is SAR(r) = x^H Q(r) x. Equation (2) then is simply averaged over 10 g of contiguous tissue to obtain the corresponding Q_{10g}(r) matrices. Strictly speaking, the number of 10-g SAR constraints is thus equal to the number of voxels in the head model. In the human head model we describe later on, it is roughly equal to 37 000. To make the problem more tractable, we have used the compression model described in [19]. This model returns a set of VOPs with corresponding Q_{VOPs} matrices which guarantees that ∃ i ∈ VOPs so that
  SAR_{10g,i} ≤ max_{r ∈ Ω} SAR_{10g}(r) ≤ SAR_{10g,i} + ε_G SAR_global ,  (3)

where Ω denotes the ensemble of all voxels. Respecting the constraints over the VOPs with a certain pre-defined tolerance hence guarantees satisfying the constraints in every voxel. The number of VOPs, denoted N_VOPs throughout for the sake of generality, depends on ε_G, the model and the experimental set-up. In this study, we have taken ε_G = 1, which returned a set of 490 VOPs. A matrix Q_G was also used to deal with the constraint on global SAR. In addition, we incorporate peak power and average power constraints for each channel (again taking into account the duty cycles). Finally, the optimization problem for the 30° target FA can be summarized as follows:

  min_{x ∈ C^p} f(x) := ‖ |Ax| − θ ‖₂² ,
  c_i(x) ≤ 10 W/kg , i ∈ {1, …, N_VOPs} ,
  c_G(x) ≤ 3.2 W/kg ,
  c_pw,k(x) ≤ 10 W , k ∈ {1, …, N_c} ,
  c_A,j(x) = |x_j|² ≤ 1 , j ∈ {1, …, N_c × N_kT} .  (4)

The functions c_i, c_G, c_A,j and c_pw,k are quadratic functions. They denote respectively the 10-g SAR constraints over the VOPs (calculated with Q_VOPs), the global SAR constraint (calculated with Q_G), the amplitude constraints, and the average power constraints for the k-th channel (here taken as 10 W). The values for the SAR constraints above correspond to the guidelines issued by the International Electrotechnical Commission (IEC) [11]. For the inversion pulse, the FA in the objective function in (1) is no longer expressed as a linear relationship but is obtained via the nonlinear function bl (for Bloch):

  min_{x ∈ C^p} f(x) := ‖ |bl(x)| − θ ‖₂² ,
  c_i(x) ≤ S̄_10g , i ∈ {1, …, N_VOPs} ,
  c_G(x) ≤ S̄_G ,
  c_pw,k(x) ≤ P̄ , k ∈ {1, …, N_c} ,
  c_A,j(x) ≤ 1 , j ∈ {1, …, N_c × N_kT} .  (5)

Note that the SAR thresholds (S̄_10g, S̄_G) and average power threshold (P̄) here are different from those in problem (4). This is because inversion pulses are often used in combination with many additional low FA pulses, and both participate in yielding an effective SAR pattern and an average power. An important example is the MPRAGE sequence, where the duty cycle of that pulse indeed is on the order of 0.25 %, given the long repetition time of the sequence.

III. Methods
A. Head model, B_1 and ΔB_0 maps.

The numerical head model we used for the SAR calculations is the one described in [34], whose surface-based representation was obtained from the voxel-based model reported in [35]. Such a representation was a necessary step to be able to run finite element electromagnetic simulations in HFSS (Ansys, Canonsburg, PA, USA) with our 8-channel pTX coil tuned and matched at 297 MHz to correspond to the proton Larmor frequency at 7 Tesla. Electric fields computed by HFSS were projected onto a 5 × 5 × 5 mm³ Cartesian grid and used to build up the Q matrices of (2). The head model (illustrated in Fig. 1(a)) contains 20 anatomical structure entities with corresponding electric properties, which added up to around 37 000 voxels. The RF and static field maps were acquired at 7 Tesla on a healthy adult volunteer, for which approval was obtained from our institutional review board. The same dataset was used in some of our earlier work [25]. For both RF transmission and reception, a home-made transceiver-array head coil was used (Fig. 1(b)), which consists of eight stripline dipoles distributed every 42.5° on a cylindrical surface of 27.6-cm diameter, leaving a small open space in front of the subject's eyes. Relative B_1 maps were first acquired using 8 small tip angle FLASH sequences [36], with a small T_1 nonlinearity correction. Two reference actual FA acquisitions [37] were then acquired to convert the previous data into absolute maps, with additional echoes in the first repetition time to monitor the ΔB_0 evolution. Matrix size was 48 × 48 × 32 with an isotropic resolution of 5 mm. These data allowed the encoding of the A matrix in (1). Due to the relative smoothness of the B_1 and ΔB_0 maps, significant differences in homogeneity when dealing with higher resolution maps are not expected. Finally, although the direct link between the resolution and the number of VOPs is not known to the authors, a higher resolution can lead to increased accuracy of the SAR results, but at the possible expense of computational speed.

Figure 1. Head model (a) and 8-channel pTX coil (b). Only subcutaneous tissues are shown for illustration in (a). Both models of the coil and the head were imported into HFSS to perform finite element electromagnetic simulations. Electric fields were returned for each voxel of the head for a given input power on each transmit element. The electric fields were then used to build the Q matrices in (2) and the VOPs [19], [20].

B. Algorithms
The nonconvex problems defined by (4) and (5) are solved using a two-stage optimization strategy, where in the first stage an initial vector x_0 is determined and used as an input to the second stage, a nonlinear programming algorithm. The techniques we investigated for the former stage are random initialization, the Gerchberg-Saxton algorithm [29] and semidefinite relaxation (SDR) [30]. For the nonlinear programming stage, we studied the algorithms proposed by the optimization toolbox of Matlab, i.e. the active-set (A-S), sequential quadratic programming (SQP) and interior-point (I-P) algorithms, as well as some variants proposed by the Knitro software allowing a user-supplied Hessian of the Lagrangian.
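As an illustration of such a second stage, the constrained MLS solve can be sketched with an off-the-shelf interior-point-type solver (SciPy's trust-constr here, standing in for the Matlab and Knitro solvers used in this work); the function names, the splitting into real variables and the toy constraint interface are our own assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def solve_mls(A, theta, Qs, bounds, x0):
    """Sketch of the stage-two solve of problem (4): minimize || |Ax| - theta ||^2
    under quadratic constraints x^H Q x <= bound, with x split into
    [Re(x); Im(x)] to work with real variables."""
    p = A.shape[1]

    def to_complex(z):
        return z[:p] + 1j * z[p:]

    def f(z):
        return np.sum((np.abs(A @ to_complex(z)) - theta) ** 2)

    cons = []
    for Q, bound in zip(Qs, bounds):
        def quad(z, Q=Q):  # SAR/power constraint as a quadratic form
            x = to_complex(z)
            return np.real(np.vdot(x, Q @ x))
        cons.append(NonlinearConstraint(quad, -np.inf, bound))

    z0 = np.concatenate([x0.real, x0.imag])
    res = minimize(f, z0, method="trust-constr", constraints=cons,
                   options={"maxiter": 200})
    return to_complex(res.x)
```

In practice the hand-coded gradients described later in the paper would be supplied to the solver instead of the finite differences SciPy falls back on here.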
1) Initialization phase

Random initialization
To increase our chances of finding the global minimum, 500 random vectors (uniform distribution) were generated and scaled to initially satisfy all constraints. These inputs were then fed to each different nonlinear programming algorithm. In the large FA case however, and when the Hessian of the Lagrangian was user-supplied, only 50 random vectors were tried due to the long execution time.
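The scaling step above can be sketched as follows, assuming the SAR and power constraints are available as (matrix, bound) pairs; this helper and its names are illustrative, not taken from the paper. Since the quadratic constraints scale as the square of a uniform scaling of x, the feasible scale factor for each is the square root of the bound-to-value ratio.

```python
import numpy as np

def scale_to_feasible(x, quad_constraints, amp_limit=1.0):
    """Uniformly scale a random start x so that every quadratic constraint
    x^H Q x <= bound and the per-sample amplitude bound hold.
    quad_constraints: list of (Q, bound) pairs."""
    scale = 1.0
    for Q, bound in quad_constraints:
        val = np.real(np.vdot(x, Q @ x))
        if val > bound:
            # quadratic forms scale as scale**2
            scale = min(scale, np.sqrt(bound / val))
    amp = np.abs(x).max()
    if amp > amp_limit:
        scale = min(scale, amp_limit / amp)  # amplitude scales linearly
    return scale * x
```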
Gerchberg-Saxton
Also called the variable exchange method [24], this algorithm is inspired from the alternating projection strategy [38], where problem (1) is reformulated in a way that does not involve the absolute value function, by explicitly splitting amplitude and phase contributions, diag(θ) and u, with u satisfying the unit modulus constraint |u_i| = 1 for i ∈ {1, …, N}. The problem is thus written as

  min_{u ∈ C^N, |u_i|=1, x ∈ C^p} ‖ Ax − diag(θ) u ‖₂² ,  (6)

where the optimization procedure is iterated over both variables u ∈ C^N and x ∈ C^p. For fixed u, x can be found by solving the unconstrained linear least squares problem:

  x = A† diag(θ) u ,  (7)

where A† is the Moore-Penrose pseudo-inverse of A. Despite the crude approximation thereby made for the large FA, where the relation between the pulse shape and the FA is nonlinear, it is an easily implementable and quick technique able to return a nonrandom and reasonable guess. The Gerchberg-Saxton (G-S) algorithm projects the current diag(θ)u_h onto the image of A using the orthogonal projector AA† and correspondingly updates its phase for the new iterate (which is simply equivalent to adjusting the new phase to that of Ax_h). This procedure is repeated until some convergence criteria are satisfied, e.g. when the cost function varies by less than 1 % from one iteration to the next. Nevertheless, as the problem is nonconvex in u, this alternating projection method is known to stall in local minima when there are no constraints [30]. This method also requires an initial phase distribution in the vector u, which we took to be that of the circularly polarized mode, obtained by aligning the phases of the RF field maps at the center of the brain. Finally, the G-S algorithm the way we have presented it so far generates only one candidate x.
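The alternation between (6) and (7) can be sketched as follows; this is our own minimal implementation, with a Tikhonov parameter lam included so that the regularized variant used later in the text is covered as well (lam small recovers plain G-S), and a stopping rule on the relative change of the cost.

```python
import numpy as np

def gerchberg_saxton(A, theta, u0, lam=1.0, max_iter=200, tol=1e-2):
    """Variable-exchange (Gerchberg-Saxton) initialization (sketch).
    theta: (N,) target flip angles; u0: (N,) initial phase vector."""
    p = A.shape[1]
    # regularized pseudo-inverse  A_lam = (A^H A + lam*Id)^-1 A^H
    A_lam = np.linalg.solve(A.conj().T @ A + lam * np.eye(p), A.conj().T)
    u = u0 / np.abs(u0)                  # enforce |u_i| = 1
    prev = np.inf
    for _ in range(max_iter):
        x = A_lam @ (theta * u)          # exchange step in x, eq. (7)
        Ax = A @ x
        u = np.exp(1j * np.angle(Ax))    # exchange step in u (phase update)
        cost = np.linalg.norm(np.abs(Ax) - theta)
        if np.isfinite(prev) and abs(prev - cost) <= tol * prev:
            break                        # cost varies by less than tol
        prev = cost
    return x
```

Each full sweep cannot increase the regularized objective of (8), which is the monotonicity property the method relies on.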
To generate different candidates, we solved instead the following problem:

  min_{u ∈ C^N, |u_i|=1, x ∈ C^p} ‖ Ax − diag(θ) u ‖₂² + λ ‖x‖₂² ,  (8)

where λ is a regularization parameter which was varied logarithmically from 1 to 300 and to 10 000 for the small and large FA targets respectively (500 steps). The range in the latter case indeed was increased as it appeared to generate more variability in the returned input vectors. The G-S algorithm in this case is implemented in the same way, this time by using x = Ã_λ diag(θ) u, where Ã_λ = (A^H A + λ Id)^{−1} A^H [23]. Cross-correlations between the corresponding results were calculated to assess their similarities.

Semidefinite Relaxation
Problem (8) can be expressed as a quadratic programming problem (QP) in the variable u by inserting x = Ã_λ diag(θ) u in the same equation and setting the positive definite Hermitian matrix M = diag(θ)(Id − A Ã_λ) diag(θ) [30]:

  QP(M) := min_{u ∈ C^N, |u_i|=1, i ∈ {1,…,N}} u^H M u .  (9)

This problem is equivalent to a semidefinite program (SDP) in U = u u^H ∈ H^N:

  SDP(M) := min_{U ∈ H^N} Tr(UM) , diag(U) = 1 , U ⪰ 0 , Rank(U) = 1 .  (10)

The semidefinite relaxation (SDR) is obtained by dropping the nonconvex rank constraint, and is known to have theoretical guarantees about the global minimum when there are no additional constraints. When the solution has rank one, the relaxation is tight and the vector u is an optimal solution to the problem defined by (9) [30]. If the solution has rank larger than one, a normalized leading eigenvector of U is used as a candidate solution. In practice, semidefinite programming solvers are rarely designed to handle complex matrices. Therefore the complex programs are often reformulated using the linear transformation T that maps Hermitian complex matrices in H^N into real symmetric positive semidefinite matrices in S^{2N} [39]:

  T(M) = [ Re(M)  −Im(M) ; Im(M)  Re(M) ] .  (11)

A block-coordinate descent method was then used to solve this new problem. Because solving this problem with this approach took several hours given the size of the matrices, we tried only three different values of λ to generate different initial guesses (1, 100 and 300 for the small FA; 1, 100 and 10 000 for the large FA). These candidates likewise were then fed as initial starting points to the nonlinear programming algorithms we investigated.
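The transformation (11), and the identity it rests on, u^H M u = [Re u; Im u]^T T(M) [Re u; Im u] for Hermitian M, can be checked numerically; the sketch below is our own illustration.

```python
import numpy as np

def complex_to_real(M):
    """Map a Hermitian matrix M in H^N to the real symmetric 2N x 2N
    matrix T(M) of eq. (11), as used to feed real-valued SDP solvers."""
    return np.block([[M.real, -M.imag],
                     [M.imag,  M.real]])
```

For Hermitian M (Re(M) symmetric, Im(M) antisymmetric), T(M) is symmetric and the two quadratic forms agree, so the complex program and its real reformulation have the same objective values.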
2) Nonlinear programming
Some of the most successful large-scale algorithms for generally constrained nonlinear optimization fall into one of two categories [40]: active-set sequential quadratic programming methods and interior-point (or barrier) methods. Active-set methods can quickly generate a good working set of active constraints, i.e. the ones that satisfy the constraint equalities in problems (4) and (5), and then perform a minimization on the smaller dimensional subspace generated by this set of linearized and active constraints, which are iteratively updated. The constraints that are estimated to be nonactive at the solution point are simply disregarded. Interior-point methods, on the other hand, always attempt to enforce all the constraints by using barrier penalty functions which progressively vanish as the number of iterations increases. The efficiency and the scaling of these schemes with respect to the size of the problem heavily depend on the particular problem of interest [41]. We now briefly detail these methods.
Active-set methods
They consist of a (quasi-)Newton procedure where at each step the problem is approximated by a QP with the constraints linearized [41]. Let QP(x_h, H_h) denote the following problem:

  min_{d ∈ C^p} (1/2) d^H H_h d + ∇f(x_h)^H d ,
  c_i(x_h) + ∇c_i(x_h)^H d ≤ 10 W/kg , i ∈ {1, …, N_VOPs} ,
  c_G(x_h) + ∇c_G(x_h)^H d ≤ 3.2 W/kg ,
  c_pw,k(x_h) + ∇c_pw,k(x_h)^H d ≤ 10 W , k ∈ {1, …, N_c} ,
  c_A,j(x_h) + ∇c_A,j(x_h)^H d ≤ 1 , j ∈ {1, …, N_c × N_kT} ,  (12)

where d = x − x_h is the unknown step vector to take at the h-th iteration and H_h is the Hessian of the Lagrangian, updated in the Matlab implementation via the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [41]. The basic principle of the A-S algorithm is to solve problem (12) and estimate the active constraints via a calculation of the Lagrange multipliers. At each iteration the constraints believed to be nonactive at the solution point are simply disregarded (but can be reincorporated at a later iteration). For those that are active, the linear constraints above allow one to find a subspace tangent to these in which a search direction is determined. The SQP implementation of Matlab is similar to A-S [42], the main differences being: strict feasibility with respect to bounds, which is beneficial when the objective function or the nonlinear constraint functions are undefined or complex outside the region constrained by the bounds; reformulated feasibility routines; greater robustness of the SQP algorithm to non-double results, so that it tries to ensure numerical convergence; and linear algebra routines refactored to be more efficient in memory usage and speed.

Interior-Point Methods
The I-P approach consists of approximating the problem by a sequence of equality-constrained problems that are easier to solve than the original inequality-constrained problem [40]. It reads:

  min_{x, s} f(x) − μ_k Σ_{l ∈ J} log(s_l) ,
  c_i(x) + s_i = 10 W/kg , i ∈ {1, …, N_VOPs} ,
  c_G(x) + s_G = 3.2 W/kg ,
  c_pw,k(x) + s_pw,k = 10 W , k ∈ {1, …, N_c} ,
  c_A,j(x) + s_A,j = 1 , j ∈ {1, …, N_c × N_kT} ,
  s_l ≥ 0 , l ∈ J ,  (13)

where J is the set of constraints, s is a vector of slack variables and μ_k > 0 is a barrier parameter with lim_{k→+∞} μ_k = 0. There are two main classes of I-P methods [40]. The first class can be viewed as direct extensions of I-P methods for linear and quadratic programming. They use line searches to enforce convergence, computing the descent steps by solving a system of equations corresponding to the Karush-Kuhn-Tucker conditions [40]; they are also called I-P with direct step. The methods in the second class use a quadratic model to define the step and incorporate a trust-region constraint to provide stability; they are also called I-P with conjugate gradient (CG) step [40]. Matlab exploits both classes in its implementation. In both I-P and A-S methods, the number of iterations, i.e. the number of times problem (12) or problem (13) was solved, was set equal to 60, which was enough in this particular problem to reach convergence towards a local minimum.

C. Implementation

1) Calculation of the objective function, the SAR constraints and their respective gradients.
Given a candidate solution x, the objective function in the small tip angle regime is easily calculated by using (4), where most of the computation consists of multiplying the matrix A with the vector x and taking an l₂-norm. Because most optimization algorithms do not deal directly with complex variables, we first split the vector x into two parts: [Re(x); Im(x)]. The gradient ∇f([Re(x); Im(x)]) could be obtained by blind finite differences with

  ∂f/∂x_j (x) ≈ ( f(x + ε e_j) − f(x) ) / ε ,

where e_j is an elementary vector (1 at one position and zero elsewhere), but this would imply calculating the same kind of matrix-vector product for each degree of freedom, i.e. 2 × N_c × N_kT times (here 80 for the small FA pulse). Denoting by a_j the j-th column of the matrix A, by linearity one has instead A(x + ε e_j) = Ax + ε a_j for the first half of the gradient vector and Ax + iε a_{j−p} for the second half, where i = √−1. The product Ax thereby is performed only once. This saves many unnecessary calculations. At the end of the procedure, the normalized root mean square error (NRMSE)

  √( (1/N) Σ_{i=1}^N (θ_i − θ)² ) / θ

is provided. In the large tip angle regime, the objective function is calculated by using (5), where most of the computation consists of carrying out a full Bloch simulation over all voxels and taking an l₂-norm. Its gradient in this case is evaluated via finite differences. The evaluation of the objective function, thus a crucial step to make this approach feasible in routine, was performed by using CUDA and a GPU card.

For one k_T-point, the SAR for one VOP for a pulse shape x is equal to c_VOP(x) = x^H Q_VOP x. For N_kT k_T-points back to back, the result conveniently becomes c_VOP(x) = x^H (Id_{N_kT} ⊗ Q_VOP) x thanks to the ordering of the elements in the A matrix, where Id_{N_kT} is the identity matrix of size N_kT and ⊗ is the Kronecker product. If N_kT is not small, the matrix sandwiched between the x vectors is sparse and can be declared as such to speed up the computations. Most importantly, the gradient of the SAR hence has the following analytical expression:

  ∇c_VOP([Re(x); Im(x)]) = 2 [ Re((Id_{N_kT} ⊗ Q_VOP) x) ; Im((Id_{N_kT} ⊗ Q_VOP) x) ] .

Although the VOPs considerably reduce the size of the problem, doing a loop over all of them to calculate the SAR would be suboptimal, especially for less compressed models. If I_c is the matrix whose columns are the N_T time-point waveforms of the different channels, for 100 % duty cycle the SAR at one spatial location can be calculated using the following formula [43]:

  SAR = Σ_{m=1}^{N_c} Σ_{n=1}^{N_c} Q_{m,n} ( I_{c,m}^H I_{c,n} )‾  (14)
      = 1^T ( Q ⊙ ( (1/N_T) I_c^H I_c ) ) 1 ,  (15)

where 1 = [1, …, 1]^T ∈ R^{N_c}, the operator ⊙ denotes the Hadamard product and the bar denotes a time average.
For one voxel, this formula already saves looping over the different time points. To calculate the SAR values over all the VOPs, we simply compute:

  SAR = 1^T ( Q_cat ⊙ [ (I_c^H I_c)‾ , …, (I_c^H I_c)‾ ] ) B ,  (16)

where Q_cat = [Q_1, …, Q_{N_VOPs}], the time-averaged block (I_c^H I_c)‾ is replicated N_VOPs times, and B is the block selection matrix on the far right. This matrix contains N_VOPs × N_c rows and should be declared as sparse (1.6 % of its elements are nonzero in our case). Each of its columns contains 8 consecutive ones, which isolate a Q matrix. This way, all SAR values are computed in one shot with optimized algebra routines and with no for loops, which are often time-consuming in Matlab implementations. All matrices besides I_c^H I_c are built once and for all throughout. Finally, because analytical formulas likewise exist for the gradients of the peak and average power constraints, they can be calculated very efficiently as well.
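The one-shot evaluation (15)-(16) can be sketched as follows; this is our own NumPy/SciPy transcription of the idea (single k_T-point case, names illustrative), which can be checked against the direct per-time-point loop.

```python
import numpy as np
from scipy.sparse import block_diag

def sar_all_vops(Q_list, Ic):
    """SAR at every VOP in one shot, in the spirit of eqs. (15)-(16).
    Q_list: list of (Nc, Nc) Hermitian VOP matrices.
    Ic: (NT, Nc) channel waveforms over NT time points."""
    NT, Nc = Ic.shape
    n = len(Q_list)
    C = (Ic.conj().T @ Ic) / NT        # time-averaged correlation (1/NT) Ic^H Ic
    Q_cat = np.hstack(Q_list)          # [Q_1, ..., Q_n], shape (Nc, n*Nc)
    C_rep = np.tile(C, (1, n))         # replicated correlation blocks
    # sparse block selection: each column holds Nc consecutive ones,
    # summing the Hadamard product over one Q matrix at a time
    B = block_diag([np.ones((Nc, 1))] * n).toarray()
    return np.real(np.ones(Nc) @ (Q_cat * C_rep) @ B)
```

All VOP SAR values are thus obtained from one Hadamard product and two matrix products, with no explicit loop over VOPs or time points.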
2) MSLS problem and the Hessian of the Lagrangian.
Since the objective function defined by (1) is not everywhere differentiable, gradient-based methods are not well defined and can cause problems. For that reason, we attempted to solve the original MLS problem by using a variant that is different but closely related, i.e. the magnitude squared least squares (MSLS) problem [24], still under the same constraints as in the MLS problems (4) and (5):

  min_{x ∈ C^p} f(x) := ‖ |g(x)|² − θ² ‖₂² ,  (17)

where g(x) = Ax and g(x) = bl(x) for the small and large tip angle regimes respectively. In the small FA regime, the gradient and the Hessian of the Lagrangian are determined analytically [24] and supplied at moderate cost to the Knitro solver (Matlab does not accept a user-supplied Hessian for the A-S and I-P methods). As for the SDP, in order to work with real variables, the linear transformation defined by (11) is applied to A, M = T(A), and, setting M_i = [col_i(M^T), col_{N+i}(M^T)] (the i-th and (N+i)-th rows of M, stacked as columns), the gradient and the Hessian of the objective function are [24]

  ∇f(x) = 4 Σ_{i=1}^N ( x^T M_i M_i^T x − θ² ) M_i M_i^T x

and

  ∇²f(x) = 4 Σ_{i=1}^N [ 2 M_i M_i^T x x^T M_i M_i^T + ( x^T M_i M_i^T x − θ² ) M_i M_i^T ]

respectively. For the large tip angle regime, the Hessian of the function, ∇²f([Re(x); Im(x)]), was obtained by blind finite differences. Second derivatives for the SAR and power constraints were trivially added to this contribution to return the Hessian of the Lagrangian. Based on this reformulation, we likewise investigated the A-S and I-P methods. Knitro on the other hand provides the user with the choice of using either the direct or the CG approach for the latter, so both were tried. Despite the use of a GPU to calculate the objective function and the finite differences in the large tip angle regime, explicitly calculating the Hessian significantly increased the numerical burden compared to the much faster, but approximate, BFGS update method.
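The analytic gradient above can be sanity-checked against finite differences on a small random instance; the sketch below is our own, and our construction of the M_i blocks from the rows of T(A) may differ in indexing convention from the paper's.

```python
import numpy as np

def msls_grad(M_blocks, x, theta2):
    """Analytic gradient of the real-variable MSLS objective
    f(x) = sum_i (x^T M_i M_i^T x - theta_i^2)^2,
    i.e. grad f(x) = 4 sum_i (x^T M_i M_i^T x - theta_i^2) M_i M_i^T x."""
    g = np.zeros_like(x)
    for Mi, t2 in zip(M_blocks, theta2):
        v = Mi.T @ x                   # two real components of the i-th sample
        g += 4.0 * (v @ v - t2) * (Mi @ v)
    return g
```

Since x^T M_i M_i^T x reproduces |(Ax)_i|² for x = [Re(x); Im(x)], a match with numerical differentiation of f confirms the formula.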
However, the goal in using the MSLS problem reformulation was to investigate potential gains in the returned RF pulse performance (NRMSE) by exploiting full knowledge of the Hessian matrix of the Lagrangian. Because of the longer duration of this implementation (around 20 minutes per run), only 50 random and 50 G-S initializations were performed in the large FA case (as opposed to 500 for the small FA). Semidefinite relaxation, as presented above, was also used to supply three different initial guesses to the Knitro solver. Knowledge of the Hessian allowed faster convergence in terms of number of iterations, which was therefore set to 30.
3) Summary of the different approaches.
We summarize the different schemes investigated in this paper for the reader's convenience. Initializations were performed using random draws, the G-S algorithm and SDR. The problem of interest is the MLS problem defined in (4) and (5). Both A-S and I-P methods were attempted using a Hessian of the Lagrangian updated by the BFGS method for Matlab, and a user-supplied one for Knitro (determined analytically for the small FA problem and via finite differences for the large FA one). The MSLS variant was used in the latter case.
IV. Results
The cross-correlation among the different input vectors varied between 0.5 and 1 for the random draws, while that among the vectors generated by the G-S algorithm varied between 0.65 and 1, which indicates a significant variability in the input states. Fig. 2 provides the box plots showing the variability of the final NRMSEs with the random initialization procedure combined with the different algorithms. In the small FA regime, one can observe a smaller variability within the 1.5 interquartile range for the A-S methods compared to the I-P ones, except for the I-P direct approach in the MSLS formulation. The result however is the opposite for the high FA problem, with a higher robustness achieved for the I-P method in the original MLS formulation.

Figure 2. Box plots obtained for the small (a) and the large tip angle regimes (b). The small horizontal lines correspond to the lowest and highest data within 1.5 interquartile range below the first and above the third quartiles respectively. The crosses are the outliers. The plots were generated with 500 random samples for all approaches except for the MSLS variant in the large FA regime (50 random samples).
Table I summarizes the best NRMSEs obtained for the small and large FAs and for each algorithm, along with their execution times (not including the initialization routine which, besides the SDR method, takes negligible time). The constraints before and after each algorithm are also reported in the second table, the initializations being the ones leading to their best respective results. The SDR initialization method returned matrices whose ranks were always far larger than 1, so that no statements about global optimality (without constraints) could be made. The differences between the A-S and SQP implementations of Matlab were minor. The SQP program was slightly slower but guaranteed at the end the strict respect of all the constraints. The A-S technique could on the other hand return a result where the constraints could be, very slightly, violated. Execution time naturally varied significantly depending on the evaluation method of the Hessian (analytical, BFGS or finite differences), the evaluation of the objective function and its gradient (STA, Bloch simulation, finite differences), the algorithm and its starting point. Solving the MSLS problem with Knitro in the large FA regime for instance takes nearly 20 minutes due to the evaluation of the Hessian via finite differences, even when using a GPU card. Solving the MLS problem using the A-S approach and the BFGS update method on the other hand takes 6 and 47 seconds for the small and large FAs respectively.

The best NRMSEs found therefore are 5.65 and 8.72 % for the 30° and 180° target FAs respectively. With a tolerance of 0.3 % for the NRMSE, when starting from a random initial guess the probabilities to converge to these results were for the different algorithms respectively 84, 84, 0, 86, 88, and 69 % (same order as in Table I) for the small FA target, indicating a high robustness of the algorithms with a cold start input, except for the I-P implementation of Matlab.
The numbers are even more encouraging when the initializations are performed via the G-S algorithm, indicating a very high robustness for these algorithms, especially in the A-S and SQP implementations, when a warm start is provided. These numbers are 100, 100, 0, 90, 91, and 60 %. The A-S and SQP methods thus are seen to be very robust, with a cross-correlation between the final output solutions varying between 0.98 and 1. As far as the large FA target is concerned, the results indicate, perhaps not surprisingly, a higher sensitivity with respect to the initial starting point when setting a tolerance of 2 % on the NRMSE, this time the I-P method being the most robust (see Fig. 2(b)). With that tolerance, the probabilities were 70, 75, 84, 36, 66 and 28 % for the random initializations; and 62, 61, 83, 68, 86 and 30 % for the G-S initialization. The I-P (MLS) approach as a result seems more robust for the large flip angle case but yields slightly worse NRMSE on average than the A-S and SQP methods (see Fig. 2(b)), and can take almost twice as long to execute (see Fig. 3(b)). Moreover, for the G-S initializations it is worth pointing out that as soon as the Tikhonov parameter exceeded the value of 1000, all MLS methods converged towards their respective minimum with 100 % probability. The A-S or SQP method combined with a G-S initialization and a strong power regularization hence still appears the method of choice in the large FA regime.

Table I. Best NRMSE in % and execution time in seconds (in parentheses) obtained for each initialization/algorithm combination for the small FA (SFA) and large FA (LFA) regimes.

                       |           MLS (Matlab)             |           MSLS (Knitro)
Regime  Initialization | A-S        SQP        I-P dir.&CG  | A-S           I-P direct    I-P CG
SFA     Random         | 5.67 (6)   5.67 (7)   6.70 (9)     | 5.68 (40)     5.68 (35)     5.73 (34)
SFA     G-S            | 5.70 (6)   5.71 (8)   7.24 (8)     | 5.68 (47)     5.73 (33)     5.69 (33)
SFA     SDR            |      (8)   5.65 (9)   12.67 (11)   | 5.77 (40)     6.42 (37)     10.72 (36)
LFA     Random         | 8.94 (49)  9.00 (78)  10.22 (44)   | 8.83 (1 369)  8.76 (1 124)  9.07 (1 126)
LFA     G-S            | 8.90 (47)  8.91 (77)  10.20 (38)   |      (1 351)  8.74 (1 119)  8.83 (1 111)
LFA     SDR            | 9.42 (49)  9.49 (48)  10.34 (44)   | 9.57 (1 357)  9.03 (1 116)  12.54 (1 117)

Despite the differences between the original MLS problem and the MSLS reformulation, the numerical experiments performed indicate (but do not prove) that exact knowledge of the Hessian is not required and that much faster, but approximate, methods such as the BFGS-update method return equally good results. To illustrate convergence speed, we calculated the NRMSE versus CPU time for each algorithm, selecting the initial starting point which both verifies the constraints and leads to the best NRMSE achieved in all runs.

Figure 3. NRMSE versus CPU time for the different algorithms in the small (a) and large (b) FA regimes. The plots for the MSLS variant in the large FA regime are not included due to the time-consuming calculation of the Hessian of the Lagrangian, resulting in a much longer execution time (∼ 20 min). In the small FA regime the plots corresponding to the SQP and A-S algorithms are almost indistinguishable. Here, for all algorithms, the initial starting point was chosen based on two criteria: feasibility with respect to all constraints and best NRMSE obtained among all numerical trials performed in this study. This best NRMSE was obtained with a random input vector, thus yielding initially a high cost function.
While it seemed not critical for the A-S methods to start with a feasible point to achieve a very good result, the theory behind I-P methods assumes such a starting point. Results are indicated in Figure 3 both for the small and large FA problems. The corresponding FA maps over the 3-dimensional brain are calculated via a Bloch simulation and are provided in Figure 4, also with the relative saturation of the different constraints indicated in bar graphs (i.e. how far the returned values are from their respective limits). In our case, it seemed that average power and peak amplitude were the most critical constraints for the small FA design, whereas the most critical constraint in the large FA regime was by far the amplitude.
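As a concrete, deliberately tiny illustration of the kind of constrained MLS problem attacked by the SQP-type methods above, one can solve a toy instance with SciPy's SLSQP solver (an SQP method) by splitting the complex unknowns into real and imaginary parts. The names `A`, `m` and `p_max` are illustrative stand-ins; the real design additionally carries local/global SAR and per-sample amplitude constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained MLS instance:  min_x || |A x| - m ||^2  s.t. ||x||^2 <= p_max.
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 6)) + 1j * rng.standard_normal((16, 6))
m = np.ones(16)          # uniform target magnitude
p_max = 0.5              # illustrative total power limit

def unpack(v):
    """Recombine the real-valued optimization vector into complex x."""
    n = v.size // 2
    return v[:n] + 1j * v[n:]

def cost(v):
    r = np.abs(A @ unpack(v)) - m
    return float(r @ r)

# Inequality constraint: SLSQP requires fun(v) >= 0 at feasibility.
cons = {"type": "ineq", "fun": lambda v: p_max - float(v @ v)}

v0 = 0.01 * rng.standard_normal(12)   # small (feasible) starting point
res = minimize(cost, v0, method="SLSQP", constraints=[cons])
x = unpack(res.x)
```

Note that the magnitude objective is nonsmooth wherever a component of `A x` vanishes; in practice, as in the paper's experiments, the solvers handle this well away from exact zeros.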
V. Discussion
A practical evaluation of optimization algorithms is complicated by details of implementation, heuristics and algorithmic options. It is worth stressing that the numerical experiments we have carried out in this paper made use of different algorithms provided by different solvers (Matlab, Knitro) with their default options. Furthermore, we found that variants of the I-P methods (CG and direct) could behave very differently for the MSLS problem. In those same methods, the evolution of the barrier parameter µ with respect to the number of iterations likewise can have a great impact on the final result [21]. It is therefore not excluded that slightly different implementations of the same class of algorithms could lead to better results. Our goal here however was to provide a readily implementable solution. Surprisingly, it was also found, for the MSLS variant, that the BFGS approximation of the Hessian of the Lagrangian led to results as good as the ones returned when the Hessian was fully calculated. This is a useful result which allows the implementation to be sped up significantly at no performance expense.

Among all algorithms and initializations we have tested, our recommendation depends on the tolerance one may have on the satisfaction of the constraints. Both A-S and SQP implementations are robust with respect to the initialization in the small FA regime, as long as the input is plausible (generated for instance quickly by the G-S algorithm). In the large FA regime, the best performance was obtained with the A-S method, but this time a smaller robustness with respect to the input state was observed. Initializations performed with the G-S method and a high regularization parameter returned the best result in a reliable manner. Whereas A-S is a bit faster than SQP, the constraints may be slightly violated at the end (by a few % at the most).
Although this can be acceptable for instance for the SAR or average power constraints, this solution would not be accepted by the MRI scanner if the peak power limit was exceeded. This however can be checked a posteriori and corrected by truncating or renormalizing the waveforms, provided the performance is not too much affected. The SQP algorithm on the other hand should return a solution leading to a strict nonviolation of the constraints. For the 3D kT-points tests we have studied, the gain in computation time for the A-S method versus SQP seemed certainly worth the possible post-correction on the waveforms, which would take negligible time.

Figure 4. Flip-angle maps obtained for the small (a) and the large tip angle regimes (b). The bar graphs on the left side of the maps indicate in a percentile way how far from their limits the constraints are ("A" is for amplitude, "M" for the maximum average power among the different channels, "L" for local SAR and "G" for global SAR). The numbers on the left of the FA maps correspond to the NRMSE.

Table II. Peak power (A) (without units), maximum average power (P) (W), maximum 10-g SAR (L) (W/kg) and global SAR (G) (W/kg) corresponding to Table I before and after each algorithm. The initializations here correspond to the ones leading to the lowest NRMSE for each algorithm.

                                |  After initialization   |    After algorithm
Regime  Algorithm               | A     P      L     G    | A     P      L     G
SFA     A-S (MLS)               | 0.81  10.17  5.15  0.85 | 0.94  10.00  6.95  1.55
SFA     SQP (MLS)               | 0.81  10.29  5.19  0.85 | 0.94  10.00  6.97  1.55
SFA     I-P Direct & CG (MLS)   | 0.57  5.51   2.60  0.49 | 0.66  7.44   2.46  0.47
SFA     A-S (MSLS)              | 0.37  2.81   0.95  0.22 | 0.94  10.00  7.67  1.73
SFA     I-P Direct (MSLS)       | 0.50  4.34   1.71  0.38 | 0.93  10.00  6.65  1.60
SFA     I-P CG (MSLS)           | 0.84  11.82  5.78  0.93 | 0.93  10.00  7.05  1.66
LFA     A-S (MLS)               | 0.72  0.06   0.10  0.03 | 1.00  0.19   0.42  0.10
LFA     SQP (MLS)               | 0.83  0.07   0.12  0.03 | 1.00  0.19   0.42  0.10
LFA     I-P Direct & CG (MLS)   | 0.81  0.07   0.11  0.03 | 0.93  0.13   0.26  0.06
LFA     A-S (MSLS)              | 0.50  0.03   0.05  0.01 | 1.00  0.19   0.38  0.10
LFA     I-P Direct (MSLS)       | 0.65  0.05   0.08  0.02 | 1.00  0.18   0.38  0.10
LFA     I-P CG (MSLS)           | 0.82  0.07   0.12  0.03 | 1.00  0.18   0.35  0.09

Despite the good results obtained in [21], our results also seem to indicate that it is more efficient to tackle the MLS problem under constraints directly rather than looping over constrained least-squares problems where the phase of the target FA is updated at each iteration [21]. For the small FA pulse, the best performance we could find was 5.65 %, which is comparable to our previous results [25], [9], except this time no tuning of parameters was required to enforce all constraints. For the inversion pulse, the best performance obtained was 8.72 % using the A-S algorithm, the MSLS variant, and a 3.5 ms kT-points pulse, but in almost 20 minutes. A slightly higher NRMSE of 8.90 % on the other hand could be obtained in 47 seconds, still using the A-S technique on the MLS problem. This is worse than the 6 % inhomogeneity we had obtained using an optimal control approach, a pulse duration of 5.9 ms and in around 3 minutes [9].
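The a posteriori check and correction of a slight peak-amplitude overshoot mentioned above can be as simple as the following sketch. The names `rf` and `a_max` are illustrative; on a real system one would also re-verify the SAR and average-power values after rescaling.

```python
import numpy as np

def enforce_peak_amplitude(rf, a_max):
    """A posteriori fix-up sketch: if an A-S solve slightly overshoots
    the peak RF amplitude limit, globally rescale the waveforms so the
    result is strictly feasible. `rf` is a complex (channels, samples)
    array of waveform samples; `a_max` is the hardware peak limit."""
    peak = np.max(np.abs(rf))
    if peak <= a_max:
        return rf                    # already feasible, leave untouched
    return rf * (a_max / peak)       # uniform renormalization
```

A global rescaling preserves the relative phases and amplitudes between channels, so the spatial interference pattern is unchanged and only the overall flip angle is slightly reduced.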
Considering however the much smaller number of degrees of freedom we used here (112 versus 94 000), the substantially shorter pulse duration and the fact that the SAR constraints this time were directly incorporated, this result is quite remarkable and is likely due to the use of second order methods compared to gradient descent approaches [9]. Furthermore, note that 8.90 % inhomogeneity is significantly better than what is typically measured at 3 Tesla using a birdcage coil (∼ 12 %) [44]. This performance, and the efficiency with which such pulses can be designed, thus allows the RF inhomogeneity problem at UHF to be mitigated in standard T1-weighted imaging sequences such as the MPRAGE in very reasonable time and thus in routine.

VI. Conclusion
In this paper we have investigated several initializations and nonlinear programming methods to solve the MLS problem in RF pulse design utilizing parallel transmission at UHF, under strict SAR and hardware constraints, both in the small (linear) and large (nonlinear) FA regimes. Our final recommendation in both cases is the use of an A-S (or SQP) method combined with a G-S initialization with strong power regularization to yield the lowest NRMSE in the most efficient way. Moreover, the combination of these techniques leads to an execution time that is sufficiently small for the approach to be implemented in routine, although the use of parallel computing devices (GPUs in our case) seems at this point a necessity for the design of large FA pulses.

References

[1] V. de Moortele, C. Akgun, G. Adriany, S. Moeller, J. Ritter, C. M. Collins, M. B. Smith, J. T. Vaughan, K. Uğurbil et al., "B1 destructive interferences and spatial phase patterns at 7T with a head transceiver array coil," Magn Reson Med, vol. 54, no. 6, pp. 1503–1518, 2005.
[2] J. Moore, M. Jankiewicz, A. W. Anderson, and J. C. Gore, "Evaluation of non-selective refocusing pulses for 7T MRI," J Magn Reson, vol. 214, pp. 212–220, 2012.
[3] N. Boulant, J.-F. Mangin, and A. Amadon, "Counteracting radio frequency inhomogeneity in the human brain at 7 Tesla using strongly modulating pulses," Magn Reson Med, vol. 61, no. 5, pp. 1165–1172, 2009.
[4] G. Adriany, V. de Moortele, F. Wiesinger, S. Moeller, J. P. Strupp, P. Andersen, C. Snyder, X. Zhang, W. Chen, K. P. Pruessmann et al., "Transmit and receive transmission line arrays for 7 Tesla parallel imaging," Magn Reson Med, vol. 53, no. 2, pp. 434–445, 2005.
[5] U. Katscher, P. Börnert, C. Leussler, and J. S. van den Brink, "Transmit SENSE," Magn Reson Med, vol. 49, no. 1, pp. 144–150, 2003.
[6] Y. Zhu, "Parallel excitation with an array of transmit coils," Magn Reson Med, vol. 51, no. 4, pp. 775–784, 2004.
[7] W. Grissom, C.-y. Yip, Z. Zhang, V. A. Stenger, J. A. Fessler, and D. C. Noll, "Spatial domain method for the design of RF pulses in multicoil parallel excitation," Magn Reson Med, vol. 56, no. 3, pp. 620–629, 2006.
[8] J. Ellermann, U. Goerke, P. Morgan, K. Ugurbil, J. Tian, S. Schmitter, T. Vaughan, and P.-F. Van De Moortele, "Simultaneous bilateral hip joint imaging at 7 Tesla using fast transmit B1 shimming methods and multichannel transmission – a feasibility study," NMR Biomed, vol. 25, no. 10, pp. 1202–1208, 2012.
[9] M. Cloos, N. Boulant, M. Luong, G. Ferrand, E. Giacomini, M.-F. Hang, C. J. Wiggins, D. Le Bihan, and A. Amadon, "Parallel-transmission-enabled magnetization-prepared rapid gradient-echo T1-weighted imaging of the human brain at 7T," Neuroimage, vol. 62, no. 3, pp. 2140–2150, 2012.
[10] A. Massire, M. A. Cloos, A. Vignaud, D. Le Bihan, A. Amadon, and N. Boulant, "Design of non-selective refocusing pulses with phase-free rotation axis by gradient ascent pulse engineering algorithm in parallel transmission at 7T," J Magn Reson, vol. 230, pp. 76–83, 2013.
[11] IEC, International standard, medical electrical equipment – part 2: particular requirements for the safety of Magnetic Resonance equipment for medical diagnosis, 3rd ed., 2011.
[12] A. Zelinski, V. Goyal, L. Angelone, G. Bonmassar, L. Wald, and E. Adalsteinsson, "Designing RF pulses with optimal specific absorption rate (SAR) characteristics and exploring excitation fidelity, SAR and pulse duration tradeoffs," in Proc. of the 15th Annual Meeting of ISMRM, 2007, p. 1699.
[13] A. Sbrizzi, H. Hoogduin, J. J. Lagendijk, P. Luijten, G. L. Sleijpen, and C. A. van den Berg, "Time efficient design of multi dimensional RF pulses: application of a multi shift CGLS algorithm," Magn Reson Med, vol. 66, no. 3, pp. 879–885, 2011.
[14] C. M. Deniz, L. Alon, R. Brown, D. K. Sodickson, and Y. Zhu, "Specific absorption rate benefits of including measured electric field interactions in parallel excitation pulse design," Magn Reson Med, vol. 67, no. 1, pp. 164–174, 2012.
[15] M. A. Cloos, M. Luong, G. Ferrand, A. Amadon, D. Le Bihan, and N. Boulant, "Local SAR reduction in parallel excitation based on channel-dependent Tikhonov parameters," J Magn Reson Imaging, vol. 32, no. 5, pp. 1209–1216, 2010.
[16] A. Sbrizzi, H. Hoogduin, J. J. Lagendijk, P. Luijten, G. L. Sleijpen, and C. A. van den Berg, "Fast design of local N-gram-specific absorption rate–optimized radiofrequency pulses for parallel transmit systems," Magn Reson Med, vol. 67, no. 3, pp. 824–834, 2012.
[17] I. Graesslin, F. Schweser, B. Annighoefer, S. Biederer, U. Katscher, K. Nehrke, H. Stahl, H. Dingemans, G. Mens, and P. Bornert, "A minimum SAR RF pulse design approach for parallel Tx with local hot spot suppression and exact fidelity constraint," in Proc. of the 16th Annual Meeting ISMRM, Toronto, Canada, 2008, p. 621.
[18] D. O. Brunner and K. P. Pruessmann, "Optimal design of multiple-channel RF pulses under strict power and SAR constraints," Magn Reson Med, vol. 63, no. 5, pp. 1280–1291, 2010.
[19] G. Eichfelder and M. Gebhardt, "Local specific absorption rate control for parallel transmission by virtual observation points," Magn Reson Med, vol. 66, no. 5, pp. 1468–1476, 2011.
[20] J. Lee, M. Gebhardt, L. L. Wald, and E. Adalsteinsson, "Local SAR in parallel transmission pulse design," Magn Reson Med, vol. 67, no. 6, pp. 1566–1578, 2012.
[21] B. Guérin, M. Gebhardt, S. Cauley, E. Adalsteinsson, and L. L. Wald, "Local specific absorption rate (SAR), global SAR, transmitter power, and excitation accuracy trade-offs in low flip-angle parallel transmit pulse design," Magn Reson Med, 2013.
[22] A. Kerr, Y. Zhu, and J. Pauly, "Phase constraint relaxation in parallel excitation pulse design," in Proc. of the 15th Annual Meeting of ISMRM, 2007, p. 1694.
[23] K. Setsompop, L. Wald, V. Alagappan, B. Gagoski, and E. Adalsteinsson, "Magnitude least squares optimization for parallel radio frequency excitation design demonstrated at 7 Tesla with eight channels," Magn Reson Med, vol. 59, no. 4, pp. 908–915, 2008.
[24] P. W. Kassakian, "Convex approximation and optimization with applications in magnitude filter design and radiation pattern synthesis," Ph.D. dissertation, University of California, Berkeley, 2006.
[25] M. Cloos, N. Boulant, M. Luong, G. Ferrand, E. Giacomini, D. Le Bihan, and A. Amadon, "kT-points: Short three-dimensional tailored RF pulses for flip-angle homogenization over an extended volume," Magn Reson Med, vol. 67, no. 1, pp. 72–80, 2012.
[26] R. W. Harrison, "Phase problem in crystallography," JOSA A, vol. 10, no. 5, pp. 1046–1055, 1993.
[27] O. Bunk, A. Diaz, F. Pfeiffer, C. David, B. Schmitt, D. K. Satapathy, and J. Veen, "Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels," Acta Crystallographica Section A: Foundations of Crystallography, vol. 63, no. 4, pp. 306–314, 2007.
[28] J. Miao, T. Ishikawa, Q. Shen, and T. Earnest, "Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes," Annu Rev Phys Chem, vol. 59, pp. 387–410, 2008.
[29] R. Gerchberg and W. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, p. 237, 1972.
[30] I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, maxcut and complex semidefinite programming," arXiv preprint arXiv:1206.0102, 2012.
[31] T.-H. Chang, Z.-Q. Luo, C. Akgun, T. Vaughan, K. Ugurbil, and P.-F. Van de Moortele, "Transmit B1 shimming at high field with SAR constraints: a two stage optimization method independent of the initial set of RF phases and amplitudes," in Proc. of the 16th Annual Meeting ISMRM, Toronto, Canada, 2008, p. 1088.
[32] J. Pauly, D. Nishimura, and A. Macovski, "A k-space analysis of small-tip-angle excitation," J Magn Reson, vol. 81, no. 1, pp. 43–56, 1989.
[33] N. Boulant and D. I. Hoult, "High tip angle approximation based on a modified Bloch–Riccati equation," Magn Reson Med, vol. 67, no. 2, pp. 339–343, 2012.
[34] A. Massire, M. A. Cloos, M. Luong, A. Amadon, A. Vignaud, C. J. Wiggins, and N. Boulant, "Thermal simulations in the human head for high field MRI using parallel transmission," J Magn Reson Imaging, vol. 35, no. 6, pp. 1312–1321, 2012.
[35] N. Makris, L. Angelone, S. Tulloch, S. Sorg, J. Kaiser, D. Kennedy, and G. Bonmassar, "MRI-based anatomical model of the human head for specific absorption rate mapping," Med Biol Eng Comput, vol. 46, no. 12, pp. 1239–1251, 2008.
[36] P. Van de Moortele, C. Snyder, L. DelaBarre, G. Adriany, J. Vaughan, and K. Ugurbil, "Calibration tools for RF shim at very high field with multiple element RF coils: from ultra fast local relative phase to absolute magnitude B1+ mapping," in Proc. of the 15th Annual Meeting of ISMRM, Berlin, Germany, 2007, p. 1676.
[37] V. L. Yarnykh, "Actual flip-angle imaging in the pulsed steady state: A method for rapid three-dimensional mapping of the transmitted radiofrequency field," Magn Reson Med, vol. 57, no. 1, pp. 192–200, 2007.
[38] H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces. Springer, 2011.
[39] M. X. Goemans and D. Williamson, "Approximation algorithms for MAX-3-CUT and other problems via complex semidefinite programming," in Proc. of the thirty-third annual ACM symposium on Theory of computing. ACM, 2001, pp. 443–452.
[40] J. Nocedal and S. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[41] L. Hei, J. Nocedal, and R. A. Waltz, "A numerical study of active-set and interior-point methods for bound constrained optimization," in Modeling, Simulation and Optimization of Complex Processes. Springer, 2008, pp. 273–292.
[42] S. Han, "A globally convergent method for nonlinear programming," JOTA, vol. 22, no. 3, pp. 297–309, 1977.
[43] I. Graesslin, H. Homann, S. Biederer, P. Börnert, K. Nehrke, P. Vernickel, G. Mens, P. Harvey, and U. Katscher, "A specific absorption rate prediction concept for parallel transmission MR," Magn Reson Med, vol. 68, no. 5, pp. 1664–1674, 2012.
[44] N. Boulant, D. L. Bihan, and A. Amadon, "Strongly modulating pulses for counteracting RF inhomogeneity at high fields,"