Higher Strong Order Methods for Itô SDEs on Matrix Lie Groups
Michelle Muniz, Matthias Ehrhardt, Michael Günther, Renate Winkler
Abstract
In this paper we present a general procedure for designing higher strong order methods for Itô stochastic differential equations on matrix Lie groups and illustrate this strategy with two novel schemes that have a strong convergence order of 1.5. Based on the Runge-Kutta–Munthe-Kaas (RKMK) method for ordinary differential equations on Lie groups, we present a stochastic version of this scheme and derive a condition such that the stochastic RKMK method has the same strong convergence order as the underlying stochastic Runge-Kutta method. Further, we show how our higher order schemes can be applied in a mechanical engineering as well as in a financial mathematics setting.
Keywords
Lie group methods · stochastic Runge-Kutta methods · geometric integration

Mathematics Subject Classification (2010)

1 Introduction

In recent years, more and more interrelations in mechanics and finance have been modeled by stochastic differential equations (SDEs) on Lie groups. A trend can be observed: kinematic models, which were previously expressed by ordinary differential equations (ODEs), are now extended by terms that include stochastic processes in order to account for possible stochastic perturbations. Examples can be found in the modeling of rigid bodies like satellites, vehicles and robots [5,6,15,28]. Furthermore, SDEs on Lie groups are also considered in the estimation of object motion from a sequence of projections [25] and in the representation of the precessional motion of magnetization in a solid [1].
Chair of Applied Mathematics / Numerical Analysis, Faculty of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Gaußstrasse 20, 42119 Wuppertal, Germany
E-mail: {muniz,ehrhardt,guenther,winkler}@uni-wuppertal.de

In financial mathematics, the consideration of stochastic processes is essential, and the solution of SDEs has been performed for many years, but usually not on Lie groups. However, the use of Lie groups to solve existing financial models, or to create new ones, could be of central importance for dealing with geometric constraints. We are confronted with geometric constraints, e.g., in the form of a positivity constraint on interest rates [22,13,27] or a symmetry and positivity constraint on covariance and correlation matrices [19], which are important, e.g., in risk management and portfolio optimization.

Despite these diversified applications, the available literature on analysis and numerical methods for SDEs on Lie groups is limited, in contrast to the available literature on ODEs on Lie groups (e.g. [20,21,7,8,10,4]). Furthermore, the available literature on Lie group SDEs mainly concerns Stratonovich SDEs [3,15,1,27]. Readers interested in Itô SDEs on Lie groups will only find the geometric Euler-Maruyama scheme with strong order γ = 1 [16,17,23]. In this paper, based on the Magnus expansion, we apply Itô-Taylor schemes or stochastic Runge-Kutta (SRK) schemes to solve a corresponding SDE in the Lie algebra. Using an SRK method in this way can be interpreted as a stochastic version of the Runge-Kutta–Munthe-Kaas (RKMK) methods. Under these circumstances, we derive a condition such that the stochastic RKMK scheme inherits the strong convergence order γ of the SRK method applied in the Lie algebra.

The remainder of the paper is organized as follows. We start with an introduction to matrix Lie groups, their corresponding Lie algebras and the linear Itô matrix SDE which we consider in this geometric setting in Section 2.
In Section 3 we take a closer look at how SDEs on Lie groups can be solved numerically and present our higher strong order methods. Then we provide some numerical and application examples in Section 4. A conclusion of our results is given in Section 5.

2 SDEs on matrix Lie groups

A Lie group is a differentiable manifold which is also a group G with a differentiable product that maps G × G → G. Matrix Lie groups are Lie groups which are also subgroups of GL(n) for n ∈ N. The tangent space at the identity of a matrix Lie group G is called the Lie algebra g. The Lie algebra is closed under forming Lie brackets [·,·] (also called commutators) of its elements. For further details on Lie groups and Lie algebras we refer the interested reader to [9].

On a matrix Lie group G we consider the linear matrix-valued Itô SDE

  dQ_t = Q_t K_t dt + Q_t V_t dW_t,   Q_0 = I_{n×n},   (2.1)

where K_t, V_t ∈ R^{n×n} are given coefficient matrices, W_t denotes the standard Brownian motion, i.e. it holds dW_t ∼ N(0, dt), and I_{n×n} is the n-dimensional identity matrix. In general, there exists no closed form solution to (2.1). However, a solution can be defined via a Magnus expansion Q_t = Q_0 ψ(Ω_t) (see [8,14,17]), where Ω_t ∈ R^{n×n} obeys the following matrix SDE

  dΩ_t = A(Ω_t) dt + Γ(Ω_t) dW_t,   Ω_0 = 0_{n×n}.   (2.2)

The drift and diffusion coefficients are given by

  A(Ω_t) = dψ⁻¹_{−Ω_t}( K_t − ½V_t² − ½C(Ω_t) ),   Γ(Ω_t) = dψ⁻¹_{−Ω_t}(V_t)   (2.3)

with

  C(Ω_t) = ( d/dΩ_t dψ_{−Ω_t}(Γ(Ω_t)) ) Γ(Ω_t)   (2.4)

(see the appendix for a proof). For Q_t ∈ G, the solution Ω_t of the matrix SDE (2.2) is an element of the Lie algebra g. The mapping ψ: g → G is considered to be a local diffeomorphism between the Lie algebra and the corresponding Lie group near Ω = 0_{n×n}.

2.1 The exponential map as local parametrization

A common choice for ψ(Ω) is exp(Ω) = Σ_{k=0}^∞ Ω^k / k! with the derivative

  ( d/dΩ exp(Ω) ) H = ( dexp_Ω(H) ) exp(Ω) = exp(Ω) ( dexp_{−Ω}(H) ),  where  dexp_{−Ω}(H) = Σ_{k=0}^∞ 1/(k+1)! ad^k_{−Ω}(H).   (2.5)

The inverse of dexp is given in the following lemma [8, p. 84].

Lemma 2.1 (Baker, 1905)
If the eigenvalues of the linear operator ad_Ω are different from 2ℓπi with ℓ ∈ {±1, ±2, ...}, then dexp_Ω is invertible. Furthermore, we have for ‖Ω‖ < π that

  dexp⁻¹_{−Ω}(H) = Σ_{k=0}^∞ (B_k / k!) ad^k_{−Ω}(H),   (2.6)

where B_k are the Bernoulli numbers, defined by Σ_{k=0}^∞ (B_k / k!) x^k = x / (e^x − 1).

We recall that the first three Bernoulli numbers are given by B_0 = 1, B_1 = −½, B_2 = ⅙ and that B_{2m+1} = 0 for m ∈ N. By ad_Ω(H) = [Ω, H] = ΩH − HΩ we express the adjoint operator, which is used iteratively:

  ad⁰_Ω(H) = H,   ad^k_Ω(H) = [Ω, ad^{k−1}_Ω(H)] = ad_Ω( ad^{k−1}_Ω(H) ),   k ≥ 1.

With these expressions and the Itô rules the coefficient in (2.4) can be simplified to

  C(Ω_t) = Σ_{p=0}^∞ Σ_{q=0}^∞ (−1)^p / ( (p+q+2) p! (q+1)! ) · ad^p_{Ω_t}( ad_{Γ(Ω_t)}( ad^q_{Ω_t}( Γ(Ω_t) ) ) );   (2.7)

we refer to [17] for the concise derivation of this expression.

2.2 The Cayley map as local parametrization

In view of the infinite series involved in ψ = exp, the question arises whether there is another mapping ψ: g → G which is not based on the evaluation of an infinite number of summands. In the case of a quadratic Lie group the answer is yes; there is such a mapping, namely the Cayley transformation

  cay(Ω) = (I − Ω/2)⁻¹ (I + Ω/2).

A quadratic Lie group G is a set of matrices Q that fulfill the equation Q^⊤ P Q = P for a given constant matrix P. For the derivative of cay(Ω) we have

  ( d/dΩ cay(Ω) ) H = ( dcay_Ω(H) ) cay(Ω) = cay(Ω) ( dcay_{−Ω}(H) ).

The analogue expression to (2.5) reads dcay_{−Ω}(H) = (I + Ω/2)⁻¹ H (I − Ω/2)⁻¹ with the inverse given by

  dcay⁻¹_{−Ω}(H) = (I + Ω/2) H (I − Ω/2),   (2.8)

see [8].
Using the Cayley map as local parametrization, the coefficient C(Ω_t) in (2.4) is given by

  C(Ω_t) = ½ V_t Ω_t V_t   (2.9)

(see the appendix for a proof).

2.3 Example: SDEs on SO(n)

As an example of a matrix Lie group, we take a closer look at the special orthogonal group

  SO(n) = { X ∈ GL(n) : X^⊤ X = I, det(X) = 1 },

which is a quadratic Lie group (with P = I), such that the Cayley map is also applicable as a local parametrization. The corresponding Lie algebra consists of the skew-symmetric matrices,

  so(n) = { Y ∈ R^{n×n} : Y + Y^⊤ = 0 }.

Since we are interested in structure preservation, we need conditions that tell us when the solution of an SDE on SO(n) is kept on the manifold.

Theorem 2.1

For the solution Q_t of (2.1) it holds Q_t ∈ SO(n) if and only if the coefficient matrices satisfy V_t ∈ so(n) and K_t + K_t^⊤ = V_t². For the proof of this theorem we refer to [17].
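The condition of Theorem 2.1 is easy to satisfy constructively: given any skew-symmetric V_t, one may take K_t as the strictly lower triangular part of V_t² plus half of its diagonal (this is the construction used in Section 4, since V_t² is symmetric for skew-symmetric V_t). A small NumPy sketch of this check, with helper names that are ours:

```python
import numpy as np

def so3(v):
    """Skew-symmetric matrix of a 3-vector, an element of so(3)."""
    a, b, c = v
    return np.array([[0.0, -c, b],
                     [c, 0.0, -a],
                     [-b, a, 0.0]])

def drift_from_diffusion(V):
    """K = strictly lower triangle of V^2 plus half of its diagonal,
    so that K + K^T = V^2 (Theorem 2.1)."""
    V2 = V @ V
    return np.tril(V2, -1) + 0.5 * np.diag(np.diag(V2))

V = so3([0.3, -1.2, 0.7])
K = drift_from_diffusion(V)
```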
3 Numerical methods for SDEs on matrix Lie groups

Applying standard numerical methods for SDEs directly to the linear matrix-valued Itô SDE (2.1) will result in a drift-off, i.e. the numerical approximations do not stay on the manifold. Consequently, one needs to consider special numerical methods that preserve the geometric properties of the Lie group G.

As the Lie algebra g represents a linear space with Euclidean-like geometry, it appears reasonable to compute the numerical approximations of the matrix SDE (2.2) in the Lie algebra and to project the solution back onto the Lie group G. A simple scheme based on the Runge-Kutta–Munthe-Kaas schemes for ODEs [20] that puts the described approach into practice can be found in [16] and is presented in the following algorithm.

Algorithm 3.1
Divide the time interval [0, T] uniformly into J subintervals [t_j, t_{j+1}], j = 0, 1, ..., J−1, and define the time step ∆ = t_{j+1} − t_j. Let Q_t = Q_0 ψ(Ω_t) with ψ: g → G be a local parametrization of the Lie group G. Starting with t_0 = 0, Q_0 = I_{n×n} and Ω_0 = 0_{n×n}, the following steps are repeated over successive intervals [t_j, t_{j+1}] until t_{j+1} = T.

1. Initialization step: Let Q_j be the approximation of Q_t at time t = t_j.
2. Numerical method step: Compute an approximation Ω_1 ≈ Ω_∆ by applying a stochastic Itô-Taylor or stochastic Runge-Kutta method to the matrix SDE (2.2).
3. Projection step: Set Q_{j+1} = Q_j ψ(Ω_1).

The order of convergence of these Lie group structure-preserving schemes clearly depends on the numerical method used in the second step of the algorithm. In order to analyze the accuracy of our geometric numerical methods, we recall that an approximating process X_t^∆ is said to converge in a strong sense with order γ > 0 to X_t if there exist a finite constant K and a ∆′ > 0 such that

  E[ |X_T − X_T^∆| ] ≤ K ∆^γ   (3.1)

for any time discretization with maximum step size ∆ ∈ (0, ∆′) [12].

3.1 Geometric schemes of strong order 1

Using the Euler-Maruyama scheme in the numerical method step of Algorithm 3.1 results in

  Ω_1 = Ω_0 + A(Ω_0)∆ + Γ(Ω_0)∆W = dψ⁻¹_{−Ω_0}( K_j − ½V_j² )∆ + dψ⁻¹_{−Ω_0}(V_j)∆W,
  Q_{j+1} = Q_j ψ(Ω_1),   (3.2)

where ∆W ∼ N(0, ∆) and K_j = K_{t_j}, V_j = V_{t_j}. Note that C(Ω_0) = 0_{n×n} for both mappings ψ = exp (see (2.7)) and ψ = cay (see (2.9)), which is why we neglect this coefficient from here on.
Since the scheme (3.2) preserves the geometry of the Lie group G, it was called the geometric Euler-Maruyama scheme [17]. It can be specified according to the mapping. For ψ = exp, we get

  Ω_1 = dexp⁻¹_{−Ω_0}( K_j − ½V_j² )∆ + dexp⁻¹_{−Ω_0}(V_j)∆W = ( K_j − ½V_j² )∆ + V_j ∆W,
  Q_{j+1} = Q_j exp(Ω_1),   (3.3)

where inserting Ω_0 = 0_{n×n} is equivalent to truncating the infinite series (2.6) after the first summand, right before any dependence on Ω appears. Using ψ = cay instead, we obtain

  Ω_1 = dcay⁻¹_{−Ω_0}( K_j − ½V_j² )∆ + dcay⁻¹_{−Ω_0}(V_j)∆W = ( K_j − ½V_j² )∆ + V_j ∆W,
  Q_{j+1} = Q_j cay(Ω_1) = Q_j (I − Ω_1/2)⁻¹ (I + Ω_1/2).

In both cases we see that the diffusion term depends only on time and not on the solution itself. This is called additive noise [12], and it is the reason why these schemes have strong order γ = 1.

3.2 Geometric schemes of strong order 1.5

A strong order of γ = 1.5 is achieved by applying the strong order 1.5 Itô-Taylor scheme in the numerical method step of Algorithm 3.1,

  Ω_1 = A(Ω_0)∆ + Γ(Ω_0)∆W + ½ Γ′Γ(Ω_0)( (∆W)² − ∆ ) + A′Γ(Ω_0)∆Z
     + ½( A′A(Ω_0) + ½ A″Γ²(Ω_0) )∆² + ( Γ′A(Ω_0) + ½ Γ″Γ²(Ω_0) )( ∆W ∆ − ∆Z )
     + ½( Γ′Γ(Ω_0) )′Γ(Ω_0)( ⅓(∆W)² − ∆ )∆W,
  Q_{j+1} = Q_j ψ(Ω_1).   (3.4)

Representing the double integral ∫_{t_j}^{t_{j+1}} ∫_{t_j}^{s} dW_u ds, the random variable ∆Z is normally distributed with mean E[∆Z] = 0, variance E[(∆Z)²] = ⅓∆³ and covariance E[∆Z ∆W] = ½∆². We consider the matrix derivatives as directional derivatives, e.g.

  A′H = ( d/dΩ A(Ω) ) H = d/dε A(Ω + εH)|_{ε=0},

which we then evaluate at Ω_0. The computation of the needed matrix derivatives for ψ = exp and ψ = cay is provided in the appendix.

A strong order of γ = 1.5 is also achieved by the stochastic Runge-Kutta scheme of Rößler [24] applied in the numerical method step,

  Ω_1 = Ω_0 + Σ_i α_i A(H_i)∆ + Σ_i ( β_i^{(1)} ∆W + β_i^{(2)} ( (∆W)² − ∆ )/√∆ + β_i^{(3)} ∆Z/∆ + β_i^{(4)} ( ⅓(∆W)² − ∆ )∆W/∆ ) Γ(H̃_i),
  Q_{j+1} = Q_j ψ(Ω_1),   (3.5)

with stage values

  H_i = Ω_0 + Σ_j a_{ij} A(H_j)∆ + Σ_j b_{ij} Γ(H̃_j) ∆Z/∆,
  H̃_i = Ω_0 + Σ_j ã_{ij} A(H_j)∆ + Σ_j b̃_{ij} Γ(H̃_j) √∆,

where α_i, β_i^{(1)}, ..., β_i^{(4)} and a_{ij}, b_{ij}, ã_{ij}, b̃_{ij} are the coefficients of the explicit strong order 1.5 scheme given in [24]. The exploitation of stochastic Runge-Kutta methods gives us the benefit of a derivative-free scheme. However, using the mapping ψ = exp raises the question of how large the truncation index q must be chosen in the truncated approximation of (2.6),

  Σ_{k=0}^{q} (B_k / k!) ad^k_{−Ω}(H) = H − ½[−Ω, H] + (1/12)[−Ω, [−Ω, H]] + ...,   (3.6)

in order to maintain a strong order of γ = 1.5. More generally, a condition is needed which connects the truncation index q with the aimed strong convergence order γ. Inspired by [8, Theorem IV.8.5] for Runge-Kutta–Munthe-Kaas methods to solve deterministic matrix ODEs, we formulate the following theorem.
Theorem 3.2
Consider Algorithm 3.1 with ψ = exp. Let the applied stochastic Runge-Kutta method in the second step of Algorithm 3.1 be of strong order γ. If the truncation index q in (3.6) satisfies q ≥ 2γ − 2, then the method of Algorithm 3.1 is of strong order γ.

Proof According to the definition of strong convergence (3.1), we have to show that

  E[ ‖Ω_∆ − Ω_1‖ ] ≤ K ∆^{(q+2)/2},

where Ω_∆ is the exact solution of (2.2) with ψ = exp at t = ∆, Ω_1 is the numerical approximation obtained in the second step of Algorithm 3.1 and K is a finite constant. Let Ω^q_∆ be the exact solution of the truncated version of (2.2) with ψ = exp at t = ∆, namely of

  dΩ_t = Σ_{k=0}^{q} (B_k / k!) ad^k_{−Ω_t}( K_t − ½V_t² ) dt + Σ_{k=0}^{q} (B_k / k!) ad^k_{−Ω_t}(V_t) dW_t.

Our proof is divided into six steps.
Step 1: Numerical error
We consider the absolute error in the Frobenius norm and estimate the error in the L¹-norm by the L²-norm. Then, we use the Minkowski inequality by introducing Ω^q_∆:

  E[ ‖Ω_∆ − Ω_1‖ ] ≤ ( E[ ‖Ω_∆ − Ω_1‖² ] )^{1/2} ≤ ( E[ ‖Ω_∆ − Ω^q_∆‖² ] )^{1/2} + ( E[ ‖Ω^q_∆ − Ω_1‖² ] )^{1/2}.

We are left with the modelling error, which corresponds to the first summand, and the numerical error, the second summand. The numerical error can be estimated by

  ( E[ ‖Ω^q_∆ − Ω_1‖² ] )^{1/2} ≤ K̃ ∆^γ

for K̃ < ∞, because we assume that we are applying an SRK method of strong order γ. In other words, it remains to be shown that

  ( E[ ‖Ω_∆ − Ω^q_∆‖² ] )^{1/2} ≤ K ∆^{(q+2)/2}

holds for the modelling error.

Step 2: Itô isometry
Inserting the integral equation of (2.2) and its truncated version, we get

  ( E[ ‖Ω_∆ − Ω^q_∆‖² ] )^{1/2}
  = ( E[ ‖ ∫_0^∆ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}( K_s − ½V_s² ) ds + ∫_0^∆ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}(V_s) dW_s ‖² ] )^{1/2}
  ≤ ( E[ ‖ ∫_0^∆ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}( K_s − ½V_s² ) ds ‖² ] )^{1/2} + ( E[ ‖ ∫_0^∆ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}(V_s) dW_s ‖² ] )^{1/2}
  ≤ ( ∆ ∫_0^∆ E[ ‖ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}( K_s − ½V_s² ) ‖² ] ds )^{1/2} + ( ∫_0^∆ E[ ‖ Σ_{k=q+1}^∞ (B_k/k!) ad^k_{−Ω_s}(V_s) ‖² ] ds )^{1/2}
  ≤ ( ∆ ∫_0^∆ E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}( K_s − ½V_s² ) ‖ )² ] ds )^{1/2} + ( ∫_0^∆ E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}(V_s) ‖ )² ] ds )^{1/2},

where we also used the Minkowski inequality, the Itô isometry and the properties of a matrix norm. Now, the summands in the last line differ only in the input matrix of the adjoint operator.

Step 3: Adjoint operator
We estimate the Frobenius norm of the adjoint operator of V_s for a fixed s ∈ [0, ∆] and keep in mind that analogous estimates hold for the adjoint operator of K_s − ½V_s². Since the Frobenius norm is submultiplicative, we have

  ‖ ad_{−Ω_s}(V_s) ‖ = ‖ [−Ω_s, V_s] ‖ ≤ ‖Ω_s V_s‖ + ‖V_s Ω_s‖ ≤ 2 ‖Ω_s‖ ‖V_s‖.

As a direct consequence, it holds

  ‖ ad^k_{−Ω_s}(V_s) ‖ ≤ 2^k ‖Ω_s‖^k ‖V_s‖,

which can also be shown via induction. Inserting this result in the expected value considered in the last line of the previous step, we get

  E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}(V_s) ‖ )² ] ≤ E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) 2^k ‖Ω_s‖^k ‖V_s‖ )² ] = ‖V_s‖² E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) 2^k ‖Ω_s‖^k )² ].

Step 4: Estimate for the remainder
It is known that the Bernoulli numbers are implicitly defined by Σ_{k=0}^∞ (B_k/k!) x^k = x / (e^x − 1). Inserting the absolute values of the Bernoulli numbers instead, it holds

  Σ_{k=0}^∞ (|B_k|/k!) x^k = (x/2)( 1 − cot(x/2) ) + 2.

Let f: I → R, x ↦ (x/2)(1 − cot(x/2)) + 2 with I = { x ∈ R : x ∉ 2πZ } ∪ {0}. Applying Taylor's theorem to the function f at the point 0 reads

  f(x) = Σ_{k=0}^{q} ( f^{(k)}(0)/k! ) x^k + R_q(x),   R_q(x) = ( f^{(q+1)}(ξ) / (q+1)! ) x^{q+1},

where we consider the Lagrange form of the remainder for some real number ξ between 0 and x.

Setting x = 2‖Ω_s‖ and recalling that the expression (2.6) only converges for ‖Ω‖ < π, we now consider the restriction f|_Ĩ: Ĩ → R with Ĩ = { x ∈ R : |x| < 2π }. There exists an upper bound M_q such that |f|_Ĩ^{(q+1)}(ξ)| ≤ M_q for all ξ between 0 and x. Moreover, the following estimate for the remainder holds:

  |R_q(x)| = | f|_Ĩ^{(q+1)}(ξ) / (q+1)! · x^{q+1} | ≤ ( M_q / (q+1)! ) |x|^{q+1} ≤ ( M_q / (q+1)! ) ( 2‖Ω_s‖ )^{q+1}.

Using this estimate in the expected value of the last line of the previous step results in

  E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) (2‖Ω_s‖)^k )² ] ≤ E[ ( ( M_q / (q+1)! ) (2‖Ω_s‖)^{q+1} )² ] = ( 2^{q+1} M_q / (q+1)! )² E[ ‖Ω_s‖^{2(q+1)} ].

Step 5: Itô-Taylor expansion
The goal of this step is to find an estimate for E[ ‖Ω_s‖^{2(q+1)} ]. For this purpose, we examine the Itô-Taylor expansion

  Ω_∆ = Γ(Ω_0) ∫_0^∆ dW_s + R_∆ = V_0 W_∆ + R_∆,   E[ ‖R_∆‖² ] ≤ C ∆²,

where C is a finite constant; for details see [12, Proposition 5.9.1]. Hence, the Frobenius norm of Ω_s can be estimated by

  ‖Ω_s‖ = ‖V_0 W_s + R_s‖ ≤ ‖V_0‖ |W_s| + ‖R_s‖.

This result allows us to use the formula for the moments of the Wiener process and the estimate for the remainder of the Itô-Taylor expansion,

  E[ ‖Ω_s‖^{2(q+1)} ] ≤ 2^{2q+1} ( ‖V_0‖^{2(q+1)} E[ W_s^{2(q+1)} ] + E[ ‖R_s‖^{2(q+1)} ] )
   ≤ 2^{2q+1} ( ‖V_0‖^{2(q+1)} ( (2(q+1))! / ( 2^{q+1} (q+1)! ) ) s^{q+1} + C_1 s^{2(q+1)} ),

where C_1 is a finite constant.

Step 6: Overall estimate
Gathering the results of the previous steps and inserting a Taylor expansion ‖V_s‖ ≤ ‖V_0‖ + C_2 s with C_2 < ∞ reads

  E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}(V_s) ‖ )² ]
  ≤ ‖V_s‖² ( 2^{q+1} M_q / (q+1)! )² 2^{2q+1} ( ‖V_0‖^{2(q+1)} ( (2(q+1))! / (2^{q+1}(q+1)!) ) s^{q+1} + C_1 s^{2(q+1)} )
  ≤ ( ‖V_0‖ + C_2 s )² ( 2^{q+1} M_q / (q+1)! )² 2^{2q+1} ( ‖V_0‖^{2(q+1)} ( (2(q+1))! / (2^{q+1}(q+1)!) ) s^{q+1} + C_1 s^{2(q+1)} ) = O( s^{q+1} ).

Thus, it holds

  ( ∫_0^∆ E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}(V_s) ‖ )² ] ds )^{1/2} = O( ∆^{(q+2)/2} ).

Analogously, one can show that

  ( ∆ ∫_0^∆ E[ ( Σ_{k=q+1}^∞ (|B_k|/k!) ‖ ad^k_{−Ω_s}( K_s − ½V_s² ) ‖ )² ] ds )^{1/2} = O( ∆^{(q+2)/2} ),

which concludes the proof. ⊓⊔

Note that, due to the definition of the Cayley map as a finite composition of matrix operations, no such theorem is needed if ψ = cay is chosen as the local parametrization in Algorithm 3.1. We further point out that Theorem 3.2 is in accordance with our results of Section 3.1, where the geometric Euler-Maruyama scheme (3.3) can be interpreted as a stochastic RKMK method with γ = 1 and q = 0.

4 Numerical examples and applications

In the following we provide numerical examples which illustrate the effectiveness of the proposed geometric methods, firstly, by simulating the strong convergence order of the proposed schemes and secondly, by showing the Lie group structure preservation of our methods.

For checking the convergence order, we set G = SO(3) and g = so(3). In order to ensure the conditions of Theorem 2.1 we have used the set-up of the matrices K_t and V_t proposed by Muniz et al. [19]. Specifically, we chose the time-dependent functions

  f_1(t) = cos(t),   f_2(t) = sin(t),   f_3(t) = 1 + t + t² + t³,

to compute a skew-symmetric matrix V_t as a linear combination,

  V_t = f_1(t) G_1 + f_2(t) G_2 + f_3(t) G_3,

where G_i, i = 1, 2, 3, denote the basis matrices of so(3),

  G_1 = [ 0 0 0 ; 0 0 −1 ; 0 1 0 ],   G_2 = [ 0 0 1 ; 0 0 0 ; −1 0 0 ],   G_3 = [ 0 −1 0 ; 1 0 0 ; 0 0 0 ].

Note that the functions f_i, i = 1, 2, 3, can be chosen arbitrarily. We then set K_t as the lower triangular matrix of V_t², where the diagonal entries of K_t are 0.5 times the diagonal entries of V_t², such that K_t + K_t^⊤ = V_t².

We simulated M independent samples of U_1, U_2 ∼ N(0, 1). Then, the random variables used in the numerical method step in Algorithm 3.1 were simulated as ∆W = U_1 √∆ and ∆Z = ½∆( ∆W + U_2 √(∆/3) ). The absolute error as defined in (3.1) was estimated by using the Frobenius norm at t_J = T, i.e. by

  (1/M) Σ_{i=1}^{M} ‖ Q^{ref}_{T,i} − Q^∆_{T,i} ‖_F,

where the approximations Q^∆_{T,i} were obtained by using Algorithm 3.1 with a decreasing sequence of dyadic step sizes ∆, and for the reference solution Q^{ref}_{T,i} we used the same method with ψ = cay and a considerably finer step size.

A log-log plot of the estimated absolute error against the step sizes can be viewed in Figure 4.1. It indicates the strong order of convergence claimed in the sections above for the geometric Euler-Maruyama scheme (3.2) (gEM), the geometric version of the Itô-Taylor scheme (3.4) (gIT) and the geometric stochastic Runge-Kutta scheme (3.5) (gSRK).

Fig. 4.1 Simulation of the strong convergence order for the schemes gEM, gIT and gSRK, each with ψ = exp and ψ = cay.

Examples from financial mathematics and multibody system dynamics verify that the structure-preserving methods derived above can be applied in practice. In the first example we apply our methods of strong order γ = 1.5 to the construction of a correlation flow. We consider the 30-day historical correlations between the S&P 500 index and the Euro/US-Dollar exchange rate, shown in Figure 4.2, and the corresponding initial historical correlation matrix R_0^{hist} computed from these data.
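The covariance-flow construction used in this example, P_t = Q_t^⊤ P_0 Q_t with Q_t solving (2.1) on SO(2) (see (4.1) below), together with the normalization to a correlation matrix, can be sketched as follows. The initial covariance P_0 and the coefficient V are hypothetical placeholder values, not the calibrated quantities or historical data of this example:

```python
import numpy as np

rng = np.random.default_rng(7)

def cay(Om):
    I = np.eye(Om.shape[0])
    return np.linalg.solve(I - Om / 2.0, I + Om / 2.0)

# hypothetical 2x2 initial covariance (placeholder values)
P0 = np.array([[1.0, -0.3],
               [-0.3, 2.0]])

# constant coefficients driving Q_t in SO(2); K matches Theorem 2.1
V = np.array([[0.0, -0.8],
              [0.8,  0.0]])
V2 = V @ V
K = np.tril(V2, -1) + 0.5 * np.diag(np.diag(V2))

# geometric Euler-Maruyama steps for Q_t
dt, J = 0.01, 100
Q = np.eye(2)
for _ in range(J):
    dW = rng.normal(0.0, np.sqrt(dt))
    Q = Q @ cay((K - 0.5 * V2) * dt + V * dW)

P = Q.T @ P0 @ Q                              # covariance flow
S_inv = np.diag(1.0 / np.sqrt(np.diag(P)))    # Sigma_t^{-1}
R = S_inv @ P @ S_inv                         # correlation matrix
```

Since Q stays orthogonal, P remains symmetric positive definite and R is a valid correlation matrix (unit diagonal, off-diagonal entries in (−1, 1)) at every time step.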
Furthermore, we estimate a density function from the historical data using kernel smoothing functions, which is also plotted in Figure 4.3. For more details on the density estimation see [2].

As a first step, we focus on covariance matrices P_t, t ≥ 0. The authors of [26] utilised the principal axis theorem and defined the covariance flow

  P_t = Q_t^⊤ P_0 Q_t,   t ≥ 0,   (4.1)

where P_0 is the initial covariance matrix computed based on R_0^{hist} and Q_t is an orthogonal matrix which, without loss of generality, can be assumed to have determinant +1.

Fig. 4.2 The 30-day historical correlations between S&P 500 and Euro/US-Dollar exchange rate.

Fig. 4.3
Empirical density function of the historical correlation between S&P 500 and Euro/US-Dollar exchange rate, computed with the MATLAB function ksdensity.

That is, Q_t ∈ SO(2). Following the approach in [19], the matrix Q_t is now assumed to be driven by the SDE (2.1), which can be solved by using Algorithm 3.1. With the resulting matrices, approximations of P_t can be computed with (4.1), which can then be transformed to the corresponding correlation matrices R_t = Σ_t⁻¹ P_t Σ_t⁻¹ with Σ_t = ( diag(P_t) )^{1/2}.

At last, a density function is estimated from this correlation flow and the free parameters involved are calibrated such that it matches the density function estimated from the historical data; see [19] for details. We executed this procedure using the geometric Itô-Taylor scheme (3.4) with ψ = cay (gIT) and the geometric Rößler scheme (3.5) with ψ = exp and truncation index q = 1 (gSRK); the resulting density functions are compared with the historical one in Figure 4.4.

Fig. 4.4
Empirical density functions of the historical correlation and of the correlation flow between S&P 500 and Euro/US-Dollar exchange rate.

In the second example, we consider a perturbed free rigid body. Let y = (y_1, y_2, y_3)^⊤ represent the angular momentum in the body frame and I_1, I_2 and I_3 be the principal moments of inertia [18]. Then the motion of this free rigid body is described by the Euler equations

  ẏ = V(y) y,   V(y) = [ 0  y_3/I_3  −y_2/I_2 ; −y_3/I_3  0  y_1/I_1 ; y_2/I_2  −y_1/I_1  0 ].

We suppose that the rigid body is perturbed by a Wiener process W_t and compute a matrix K(y) such that the dynamics are kept on the manifold, i.e. we compute K(y) from the condition K(y) + K^⊤(y) = V²(y). Consequently, we regard the Itô SDE

  dy_t = K(y_t) y_t dt + V(y_t) y_t dW_t,   (4.2)

where the solution evolves on the unit sphere if the initial value y_0 satisfies |y_0| = 1. Writing the solution as y_t = Q_t y_0 with Q_t ∈ SO(3), we focus on the nonlinear matrix SDE

  dQ_t = K(Q_t y_0) Q_t dt + V(Q_t y_0) Q_t dW_t,   Q_0 = I_{3×3}.   (4.3)

The coefficients of the corresponding SDE in the Lie algebra (2.2) read

  A(Ω) = dψ⁻¹_Ω( K(ψ(Ω) Q_j y_0) − ½ V²(ψ(Ω) Q_j y_0) ),   Γ(Ω) = dψ⁻¹_Ω( V(ψ(Ω) Q_j y_0) ).   (4.4)

Fig. 4.5
Sample paths of the geometric Euler-Maruyama (blue) and the traditional Euler-Maruyama scheme (red) applied to (4.2).
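Algorithm 3.1 applied to this rigid-body problem with the Euler-Maruyama step and ψ = cay can be sketched as follows. The moments of inertia and the initial angle are illustrative choices (with |y_0| = 1), not necessarily the values used for Figure 4.5; note the right-action update y ← ψ(Ω_1) y discussed below:

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, I3 = 2.0, 1.0, 2.0 / 3.0       # assumed principal moments of inertia

def V_of(y):
    """Coefficient matrix of the Euler equations, V(y) in so(3)."""
    y1, y2, y3 = y
    return np.array([[0.0,      y3 / I3, -y2 / I2],
                     [-y3 / I3, 0.0,      y1 / I1],
                     [ y2 / I2, -y1 / I1, 0.0]])

def K_of(y):
    """Drift coefficient satisfying K(y) + K(y)^T = V(y)^2."""
    V2 = V_of(y) @ V_of(y)
    return np.tril(V2, -1) + 0.5 * np.diag(np.diag(V2))

def cay(Om):
    I = np.eye(3)
    return np.linalg.solve(I - Om / 2.0, I + Om / 2.0)

y = np.array([np.sin(1.1), 0.0, np.cos(1.1)])   # |y_0| = 1
dt = 0.03
for _ in range(200):
    V = V_of(y)
    Om1 = (K_of(y) - 0.5 * V @ V) * dt + V * rng.normal(0.0, np.sqrt(dt))
    y = cay(Om1) @ y     # right-action update, Q_{j+1} = psi(Omega_1) Q_j
```

Because cay(Ω_1) is exactly orthogonal for skew-symmetric Ω_1, the Euclidean norm of y is preserved up to round-off, in contrast to the traditional Euler-Maruyama scheme applied directly to (4.2).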
Now, SDE (4.2) can be solved by applying Algorithm 3.1 to the SDE (4.3). Note that we deal with right multiplication of the solution Q on the right-hand side of (4.3) instead of left multiplication as in (2.1). As a consequence, the sign of the index of the operator dψ⁻¹ is changed in (4.4), and the solution of the projection step in Algorithm 3.1 should be Q_{j+1} = ψ(Ω_1) Q_j. We refer to [20] for more details on this matter.

In Figure 4.5 we simulated 200 steps of the trajectory of (4.2) with a step size of ∆ = 0.03 by using Algorithm 3.1 with the initial value y_0 = ( sin(1.1), 0, cos(1.1) )^⊤ and the moments of inertia I_1 = 2, I_2 = 1, I_3 = 2/3. For the numerical method step of Algorithm 3.1 we used the Euler-Maruyama scheme with ψ = cay. Emphasizing the structure-preserving character of Algorithm 3.1, we also plotted a sample path of the traditional Euler-Maruyama scheme applied directly to (4.2), whose trajectory clearly fails to stay on the manifold. This phenomenon can also be viewed in Figure 4.6, where we visualize the distance of the approximate solutions from the manifold.

Fig. 4.6 Log-distance of the numerical solutions to the unit sphere.

5 Conclusion

We have presented stochastic Lie group methods for linear Itô SDEs on matrix Lie groups that have a higher strong convergence order than the known geometric Euler-Maruyama scheme. Based on RKMK methods for ODEs on Lie groups, we have proven a condition on the truncation index of the inverse of dexp such that the stochastic RKMK method inherits the convergence order of the underlying SRK method. Additionally, we have shown examples for the application of our methods in mechanical engineering and in financial mathematics.

Our methods require further investigations for the application to nonlinear Itô SDEs on matrix Lie groups, which we consider as future work. Moreover, we have restricted our research for this paper to the strong convergence order. In future research, an investigation of the weak convergence order of stochastic Lie group methods will also be conducted.
Acknowledgements
The authors would like to thank Martin Friesen (Dublin City University) for the in-depth discussions that improved the content of this paper. The work of the authors was partially supported by the bilateral German-Slovakian project
MATTHIAS - Modelling and Approximation Tools and Techniques for Hamilton-Jacobi-Bellman equations in finance and Innovative Approach to their Solution, financed by DAAD and the Slovakian Ministry of Education. Further, the authors acknowledge partial support from the bilateral German-Portuguese project FRACTAL - FRActional models and CompuTationAL Finance, financed by DAAD and the CRUP - Conselho de Reitores das Universidades Portuguesas.
References
1. Ableidinger, M., Buckwar, E.: Weak stochastic Runge-Kutta–Munthe-Kaas methods for finite spin ensembles. Appl. Numer. Math., 50–63 (2017)
2. Bowman, A.W., Azzalini, A.: Applied Smoothing Techniques for Data Analysis. Oxford University Press, New York (1997)
3. Burrage, K., Burrage, P.M.: High strong order methods for non-commutative stochastic ordinary differential equation systems and the Magnus formula. Phys. D, 34–48 (1999)
4. Celledoni, E., Marthinsen, H., Owren, B.: An introduction to Lie group integrators - basics, new developments and applications. J. Comput. Phys., Part B, 1040–1061 (2014)
5. Chirikjian, G.S.: Stochastic Models, Information Theory, and Lie Groups, Volume 1: Classical Results and Geometric Methods. Springer Science & Business Media, Boston (2009)
6. Chirikjian, G.S.: Stochastic Models, Information Theory, and Lie Groups, Volume 2: Analytic Methods and Modern Applications. Springer Science & Business Media, Boston (2011)
7. Crouch, P.E., Grossman, R.: Numerical integration of ordinary differential equations on manifolds. J. Nonlin. Sci., 1–33 (1993)
8. Hairer, E., Lubich, C., Wanner, G.: Geometric Numerical Integration. Springer Series in Computational Mathematics, Vol. 31. Springer Verlag, 2nd edition, Berlin Heidelberg New York (2006)
9. Hall, B.C.: Lie Groups, Lie Algebras, and Representations. Graduate Texts in Mathematics, Vol. 222. Springer Verlag, 2nd edition, Heidelberg (2015)
10. Iserles, A., Munthe-Kaas, H.Z., Nørsett, S.P., Zanna, A.: Lie-group methods. Acta Numerica 9, 215–365 (2000)
11. Kamm, K., Pagliarani, S., Pascucci, A.: On the stochastic Magnus expansion and its application to SPDEs. arXiv preprint 2001.01098 (2020)
12. Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Springer, Berlin (1992)
13. Lim, N., Privault, N.: Analytic bond pricing for short rate dynamics evolving on matrix Lie groups. Quant. Fin. (1), 119–129 (2016)
14. Magnus, W.: On the exponential solution of differential equations for a linear operator. Comm. Pure Appl. Math., 649–673 (1954)
15. Malham, S.J.A., Wiese, A.: Stochastic Lie group integrators. SIAM J. Sci. Comput. (2), 597–617 (2008)
16. Marjanovic, G., Piggott, M.J., Solo, V.: A simple approach to numerical methods for stochastic differential equations in Lie groups. In: Proceedings of the 54th IEEE Conference on Decision and Control, IEEE, Osaka, Japan, pp. 7143–7150, December 2015
17. Marjanovic, G., Solo, V.: Numerical methods for stochastic differential equations in matrix Lie groups made simple. IEEE Trans. Auto. Contr. (12), 4035–4050 (2018)
18. Marsden, J.E., Ratiu, T.S.: Introduction to Mechanics and Symmetry. Springer Verlag, 2nd edition, New York (1999)
19. Muniz, M., Ehrhardt, M., Günther, M.: Approximating correlation matrices using stochastic Lie group methods. Mathematics 9(1):94 (2021)
20. Munthe-Kaas, H.: Runge-Kutta methods on Lie groups. BIT Numer. Math. 38(1), 92–111 (1998)
21. Munthe-Kaas, H.: High order Runge-Kutta methods on manifolds. Appl. Numer. Math., 115–127 (1999)
22. Park, F.C., Chun, C.M., Han, C.W., Webber, N.: Interest rate models on Lie groups. Quant. Fin. (4), 559–572 (2010)
23. Piggott, M.J., Solo, V.: Geometric Euler-Maruyama schemes for stochastic differential equations in SO(n) and SE(n). SIAM J. Numer. Anal. (4), 2490–2516 (2016)
24. Rößler, A.: Explicit order 1.5 schemes for the strong approximation of Itô stochastic differential equations. PAMM 5(1), 817–818 (2005)
25. Soatto, S., Perona, P., Frezza, R., Picci, G.: Motion estimation via dynamic vision. In: Proceedings of the 33rd IEEE Conference on Decision and Control, Vol. 4, IEEE, Piscataway, NJ, pp. 3253–3258, 1994
26. Teng, L., Wu, X., Günther, M., Ehrhardt, M.: A new methodology to create valid time-dependent correlation matrices via isospectral flows. ESAIM: Math. Model. Numer. Anal. (2), 361–371 (2020)
27. Wang, Z., Ma, Q., Yao, Z., Ding, X.: The Magnus expansion for stochastic differential equations. J. Nonlin. Sci., 419–447 (2020)
28. Zuyev, A., Vasylieva, I.: Partial stabilization of stochastic systems with application to rotating rigid bodies. IFAC-PapersOnLine (16), 162–167 (2019)

A Proofs
Theorem A.1
The solution of (2.1) can be written as $Q_t = Q_0\,\psi(\Omega_t)$, where $\Omega_t$ obeys the SDE (2.2) with coefficients given by
\[
A(\Omega_t) = d\psi^{-1}_{-\Omega_t}\Bigl(K_t - \tfrac{1}{2}V_t^2 - \tfrac{1}{2}C(\Omega_t)\Bigr),
\qquad
\Gamma(\Omega_t) = d\psi^{-1}_{-\Omega_t}(V_t)
\]
with
\[
C(\Omega_t) = \Bigl(\frac{d}{d\Omega_t}\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\Bigr)\Gamma(\Omega_t). \tag{A.1}
\]

Proof
Let $f:\mathfrak{g}\to G$ be the chosen local parametrization such that $Q_t = f(\Omega_t)$ with $f(\Omega_t) = Q_0\,\psi(\Omega_t)$. Due to the Itô rules, $dQ_t = d(f(\Omega_t))$ is given exactly by the first two terms of the Taylor expansion
\[
\begin{aligned}
dQ_t &= \frac{d}{d\varepsilon}\, f(\Omega_t + \varepsilon\, d\Omega_t)\Big|_{\varepsilon=0}
      + \frac{1}{2}\,\frac{d^2}{d\varepsilon^2}\, f(\Omega_t + \varepsilon\, d\Omega_t)\Big|_{\varepsilon=0} \\
&= \Bigl(\frac{d}{d\Omega_t} f(\Omega_t)\Bigr) d\Omega_t
 + \frac{1}{2}\Bigl(\frac{d^2}{d\Omega_t^2} f(\Omega_t)\Bigr)(d\Omega_t)^2 \\
&= \biggl(\Bigl(\frac{d}{d\Omega_t} f(\Omega_t)\Bigr) A(\Omega_t)
 + \frac{1}{2}\Bigl(\frac{d^2}{d\Omega_t^2} f(\Omega_t)\Bigr)\Gamma(\Omega_t)\biggr) dt
 + \Bigl(\frac{d}{d\Omega_t} f(\Omega_t)\Bigr)\Gamma(\Omega_t)\, dW_t,
\end{aligned}
\]
where we have used the fact that $(d\Omega_t)^2 = \bigl(A(\Omega_t)\,dt + \Gamma(\Omega_t)\,dW_t\bigr)^2 = \Gamma(\Omega_t)^2\, dt$. For both $\psi = \exp$ and $\psi = \mathrm{cay}$ it holds that
\[
\Bigl(\frac{d}{d\Omega}\,\psi(\Omega)\Bigr) H = \bigl(d\psi_{\Omega}(H)\bigr)\psi(\Omega) = \psi(\Omega)\bigl(d\psi_{-\Omega}(H)\bigr),
\]
which we use to specify the first part of the drift coefficient
\[
\Bigl(\frac{d}{d\Omega_t} f(\Omega_t)\Bigr) A(\Omega_t)
= Q_0 \Bigl(\frac{d}{d\Omega_t}\,\psi(\Omega_t)\Bigr) A(\Omega_t)
= Q_0\,\psi(\Omega_t)\, d\psi_{-\Omega_t}\bigl(A(\Omega_t)\bigr)
= Q_t\, d\psi_{-\Omega_t}\bigl(A(\Omega_t)\bigr).
\]
Analogously, we have $\bigl(\frac{d}{d\Omega_t} f(\Omega_t)\bigr)\Gamma(\Omega_t) = Q_t\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)$. For the second derivative we obtain
\[
\begin{aligned}
\Bigl(\frac{d^2}{d\Omega_t^2} f(\Omega_t)\Bigr)\Gamma(\Omega_t)
&= \Bigl(\frac{d}{d\Omega_t}\, Q_t\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\Bigr)\Gamma(\Omega_t) \\
&= \biggl(\Bigl(\frac{d}{d\Omega_t} Q_t\Bigr)\Gamma(\Omega_t)\biggr)\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)
 + Q_t\Bigl(\frac{d}{d\Omega_t}\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\Bigr)\Gamma(\Omega_t) \\
&= Q_t\bigl(d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\bigr)^2 + Q_t\, C(\Omega_t),
\end{aligned}
\]
where $C(\Omega_t) = \bigl(\frac{d}{d\Omega_t}\, d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\bigr)\Gamma(\Omega_t)$. Comparing these results with SDE (2.1) we get
\[
V_t = d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)
\quad\text{and}\quad
K_t = d\psi_{-\Omega_t}\bigl(A(\Omega_t)\bigr) + \tfrac{1}{2}\bigl(d\psi_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\bigr)^2 + \tfrac{1}{2}\,C(\Omega_t)
\]
and thus $\Gamma(\Omega_t) = d\psi^{-1}_{-\Omega_t}(V_t)$ and $A(\Omega_t) = d\psi^{-1}_{-\Omega_t}\bigl(K_t - \tfrac{1}{2}V_t^2 - \tfrac{1}{2}C(\Omega_t)\bigr)$.
⊓⊔ Lemma A.1
For $\psi = \mathrm{cay}$ the coefficient (A.1) is given by
\[
C(\Omega_t) = \tfrac{1}{2}\, V_t\,\Omega_t\, V_t. \tag{A.2}
\]

Proof  With
\[
\begin{aligned}
\Bigl(\frac{d}{d\Omega_t}\, d\mathrm{cay}_{-\Omega_t}(H)\Bigr)\tilde H
&= \Bigl(\frac{d}{d\Omega_t}\Bigl(\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1} H \bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}\Bigr)\Bigr)\tilde H \\
&= \biggl(\Bigl(\frac{d}{d\Omega_t}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\Bigr)\tilde H\biggr) H \bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}
 + \bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1} H \biggl(\Bigl(\frac{d}{d\Omega_t}\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}\Bigr)\tilde H\biggr) \\
&= -\tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\tilde H \bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1} H \bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}
 + \tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1} H \bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}\tilde H \bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}
\end{aligned}
\]
and both $H$ and $\tilde H$ replaced by the diffusion coefficient $\Gamma(\Omega_t) = d\mathrm{cay}^{-1}_{-\Omega_t}(V_t) = \bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr)$ we get
\[
\begin{aligned}
\Bigl(\frac{d}{d\Omega_t}\, d\mathrm{cay}_{-\Omega_t}\bigl(\Gamma(\Omega_t)\bigr)\Bigr)\Gamma(\Omega_t)
&= -\tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\Gamma(\Omega_t)\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\Gamma(\Omega_t)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1} \\
&\quad + \tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\Gamma(\Omega_t)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}\Gamma(\Omega_t)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1} \\
&= -\tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr)\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1} \\
&\quad + \tfrac{1}{2}\bigl(I + \tfrac{\Omega_t}{2}\bigr)^{-1}\bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1}\bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr)\bigl(I - \tfrac{\Omega_t}{2}\bigr)^{-1} \\
&= -\tfrac{1}{2}\, V_t \bigl(I - \tfrac{\Omega_t}{2}\bigr) V_t + \tfrac{1}{2}\, V_t \bigl(I + \tfrac{\Omega_t}{2}\bigr) V_t
 = \tfrac{1}{2}\, V_t\,\Omega_t\, V_t. \qquad\sqcap\!\!\!\sqcup
\end{aligned}
\]

B Matrix derivatives
In this section we provide the matrix derivatives that we used in the geometric version of the Itô-Taylor scheme of strong order $\gamma = 1.5$.

B.1 Derivatives for $\psi = \mathrm{cay}$

Computing the derivative of (2.8) in the direction of an arbitrary matrix $\tilde H$, we get
\[
\begin{aligned}
\Bigl(\frac{d}{d\Omega}\, d\mathrm{cay}^{-1}_{-\Omega}(H)\Bigr)\tilde H
&= \Bigl(\frac{d}{d\Omega}\bigl(H - \tfrac{1}{2}H\Omega + \tfrac{1}{2}\Omega H - \tfrac{1}{4}\Omega H\Omega\bigr)\Bigr)\tilde H \\
&= \frac{d}{dt}\Bigl(H - \tfrac{1}{2}H(\Omega + t\tilde H) + \tfrac{1}{2}(\Omega + t\tilde H)H - \tfrac{1}{4}(\Omega + t\tilde H)H(\Omega + t\tilde H)\Bigr)\Big|_{t=0} \\
&= \tfrac{1}{2}\bigl(-H\tilde H + \tilde H H\bigr) - \tfrac{1}{4}\bigl(\Omega H\tilde H + \tilde H H\Omega\bigr). \qquad (\mathrm{B.1})
\end{aligned}
\]
Subsequently, the second directional derivative reads
\[
\Bigl(\frac{d^2}{d\Omega^2}\, d\mathrm{cay}^{-1}_{-\Omega}(H)\Bigr)\tilde H = -\tfrac{1}{2}\,\tilde H H \tilde H. \qquad (\mathrm{B.2})
\]
Inserting $H = V$ and $\tilde H = \Gamma(\Omega) = d\mathrm{cay}^{-1}_{-\Omega}(V)$, we obtain
\[
\begin{aligned}
\Bigl(\frac{d}{d\Omega}\,\Gamma(\Omega)\Bigr)\Gamma(\Omega)
&= \Bigl(\frac{d}{d\Omega}\, d\mathrm{cay}^{-1}_{-\Omega}(V)\Bigr)\Gamma(\Omega) \\
&= \tfrac{1}{2}\bigl(-V\Gamma(\Omega) + \Gamma(\Omega)V\bigr) - \tfrac{1}{4}\bigl(\Omega V\Gamma(\Omega) + \Gamma(\Omega)V\Omega\bigr) \\
&= -\tfrac{1}{2}\,V\Omega V + \tfrac{1}{4}\,V\Omega V\Omega - \tfrac{1}{4}\,\Omega V\Omega V + \tfrac{1}{8}\,\Omega V\Omega V\Omega.
\end{aligned}
\]
Similar expressions are obtained for $\bigl(\frac{d}{d\Omega}A(\Omega)\bigr)\Gamma(\Omega)$, $\bigl(\frac{d}{d\Omega}A(\Omega)\bigr)A(\Omega)$ and $\bigl(\frac{d}{d\Omega}\Gamma(\Omega)\bigr)A(\Omega)$ by inserting $H = K - \tfrac{1}{2}V^2 - \tfrac{1}{2}C(\Omega)$ and $\tilde H = \Gamma(\Omega)$, $H = K - \tfrac{1}{2}V^2 - \tfrac{1}{2}C(\Omega)$ and $\tilde H = A(\Omega)$, and $H = V$ and $\tilde H = A(\Omega)$ in (B.1), respectively. Proceed accordingly to compute the second derivatives $\bigl(\frac{d^2}{d\Omega^2}A(\Omega)\bigr)\Gamma(\Omega)$ and $\bigl(\frac{d^2}{d\Omega^2}\Gamma(\Omega)\bigr)\Gamma(\Omega)$.

B.2 Derivatives for $\psi = \exp$

In the following we present derivatives of (2.5) up to $k =$
$4$, i.e. of
\[
\begin{aligned}
\sum_{k=0}^{4} \frac{B_k}{k!}\,\mathrm{ad}^k_{-\Omega}(H)
&= H - \tfrac{1}{2}\,[-\Omega, H] + \tfrac{1}{12}\,\bigl[-\Omega, [-\Omega, H]\bigr]
 - \tfrac{1}{720}\,\Bigl[-\Omega, \bigl[-\Omega, \bigl[-\Omega, [-\Omega, H]\bigr]\bigr]\Bigr] \\
&= H - \tfrac{1}{2}\bigl(H\Omega - \Omega H\bigr)
 + \tfrac{1}{12}\bigl(\Omega^2 H + H\Omega^2 - 2\,\Omega H\Omega\bigr) \\
&\quad - \tfrac{1}{720}\bigl(\Omega^4 H - 4\,\Omega^3 H\Omega + 6\,\Omega^2 H\Omega^2 - 4\,\Omega H\Omega^3 + H\Omega^4\bigr),
\end{aligned}
\]
where the $k = 3$ term vanishes since $B_3 = 0$. Computing the directional derivative we get
\[
\begin{aligned}
\Bigl(\frac{d}{d\Omega}\sum_{k=0}^{4} \frac{B_k}{k!}\,\mathrm{ad}^k_{-\Omega}(H)\Bigr)\tilde H
&= -\tfrac{1}{2}\bigl(H\tilde H - \tilde H H\bigr)
 + \tfrac{1}{12}\bigl(\Omega\tilde H H + \tilde H\Omega H + H\Omega\tilde H + H\tilde H\Omega - 2\,\tilde H H\Omega - 2\,\Omega H\tilde H\bigr) \\
&\quad - \tfrac{1}{720}\Bigl[\bigl(\tilde H\Omega^3 + \Omega\tilde H\Omega^2 + \Omega^2\tilde H\Omega + \Omega^3\tilde H\bigr)H
 - 4\bigl(\tilde H\Omega^2 + \Omega\tilde H\Omega + \Omega^2\tilde H\bigr)H\Omega - 4\,\Omega^3 H\tilde H \\
&\qquad + 6\bigl(\tilde H\Omega + \Omega\tilde H\bigr)H\Omega^2 + 6\,\Omega^2 H\bigl(\tilde H\Omega + \Omega\tilde H\bigr)
 - 4\,\tilde H H\Omega^3 - 4\,\Omega H\bigl(\tilde H\Omega^2 + \Omega\tilde H\Omega + \Omega^2\tilde H\bigr) \\
&\qquad + H\bigl(\tilde H\Omega^3 + \Omega\tilde H\Omega^2 + \Omega^2\tilde H\Omega + \Omega^3\tilde H\bigr)\Bigr],
\end{aligned}
\]
whereas the second directional derivative is given by
\[
\begin{aligned}
\Bigl(\frac{d^2}{d\Omega^2}\sum_{k=0}^{4} \frac{B_k}{k!}\,\mathrm{ad}^k_{-\Omega}(H)\Bigr)\tilde H
&= \tfrac{1}{6}\bigl(\tilde H^2 H + H\tilde H^2 - 2\,\tilde H H\tilde H\bigr) \\
&\quad - \tfrac{1}{360}\Bigl[\bigl(\tilde H^2\Omega^2 + \tilde H\Omega\tilde H\Omega + \tilde H\Omega^2\tilde H + \Omega\tilde H^2\Omega + \Omega\tilde H\Omega\tilde H + \Omega^2\tilde H^2\bigr)H \\
&\qquad - 4\bigl(\tilde H^2\Omega + \tilde H\Omega\tilde H + \Omega\tilde H^2\bigr)H\Omega
 - 4\bigl(\tilde H\Omega^2 + \Omega\tilde H\Omega + \Omega^2\tilde H\bigr)H\tilde H \\
&\qquad + 6\,\tilde H^2 H\Omega^2 + 6\bigl(\tilde H\Omega + \Omega\tilde H\bigr)H\bigl(\tilde H\Omega + \Omega\tilde H\bigr) + 6\,\Omega^2 H\tilde H^2 \\
&\qquad - 4\,\tilde H H\bigl(\tilde H\Omega^2 + \Omega\tilde H\Omega + \Omega^2\tilde H\bigr)
 - 4\,\Omega H\bigl(\tilde H^2\Omega + \tilde H\Omega\tilde H + \Omega\tilde H^2\bigr) \\
&\qquad + H\bigl(\tilde H^2\Omega^2 + \tilde H\Omega\tilde H\Omega + \tilde H\Omega^2\tilde H + \Omega\tilde H^2\Omega + \Omega\tilde H\Omega\tilde H + \Omega^2\tilde H^2\bigr)\Bigr].
\end{aligned}
\]
Note that evaluating the derivatives at $\Omega = 0_{n\times n}$ causes many summands to become zero, which makes computing higher summands ($k > 4$) considerably easier.
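The matrix-calculus formulas of Appendices A and B can be spot-checked numerically with finite differences. The following self-contained sketch is our own illustration, not part of the paper; it assumes the common Cayley-map convention $\mathrm{cay}(\Omega) = (I - \Omega/2)^{-1}(I + \Omega/2)$, so that $d\mathrm{cay}_{-\Omega}(H) = (I + \Omega/2)^{-1} H (I - \Omega/2)^{-1}$ and $d\mathrm{cay}^{-1}_{-\Omega}(H) = (I + \Omega/2) H (I - \Omega/2)$, and uses the Bernoulli numbers $B_0 = 1$, $B_1 = -1/2$, $B_2 = 1/6$, $B_3 = 0$, $B_4 = -1/30$ for the truncated $d\exp^{-1}$ series:

```python
import numpy as np
from fractions import Fraction
from math import factorial

def cay(om):
    """Cayley map cay(Om) = (I - Om/2)^{-1} (I + Om/2) (assumed convention)."""
    eye = np.eye(om.shape[0])
    return np.linalg.solve(eye - om / 2, eye + om / 2)

def dcay_m(om, h):
    """dcay_{-Om}(H) = (I + Om/2)^{-1} H (I - Om/2)^{-1}."""
    eye = np.eye(om.shape[0])
    return np.linalg.inv(eye + om / 2) @ h @ np.linalg.inv(eye - om / 2)

def dcayinv_m(om, h):
    """dcay^{-1}_{-Om}(H) = (I + Om/2) H (I - Om/2)."""
    eye = np.eye(om.shape[0])
    return (eye + om / 2) @ h @ (eye - om / 2)

def ad(a, x):
    """Matrix commutator ad_A(X) = A X - X A."""
    return a @ x - x @ a

def dexpinv_trunc(om, h):
    """Truncated series sum_{k=0}^{4} B_k/k! ad_{-Om}^k(H)."""
    bern = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30)]
    term, total = h, np.zeros_like(h)
    for k in range(5):
        total = total + float(bern[k] / factorial(k)) * term
        term = ad(-om, term)        # becomes ad_{-Om}^{k+1}(H) for the next pass
    return total

rng = np.random.default_rng(0)
om = rng.standard_normal((4, 4)); om -= om.T     # skew-symmetric, as in so(n)
v = rng.standard_normal((4, 4)); v -= v.T
eps = 1e-6

# 1) tangent identity (d/dOm cay(Om))H = cay(Om) dcay_{-Om}(H), via central difference
fd = (cay(om + eps * v) - cay(om - eps * v)) / (2 * eps)
err1 = np.max(np.abs(fd - cay(om) @ dcay_m(om, v)))

# 2) Lemma A.1: differentiating Om -> dcay_{-Om}(H) at fixed H = Gamma in the
#    direction Gamma; with this convention the constant works out to 1/2
gam = dcayinv_m(om, v)                           # Gamma(Om) = dcay^{-1}_{-Om}(V)
c_fd = (dcay_m(om + eps * gam, gam) - dcay_m(om - eps * gam, gam)) / (2 * eps)
err2 = np.max(np.abs(c_fd - 0.5 * v @ om @ v))

# 3) (B.1)/(B.2): dcay^{-1}_{-Om}(H) is quadratic in Om, so central differences
#    are exact up to rounding
d1_fd = (dcayinv_m(om + eps * gam, v) - dcayinv_m(om - eps * gam, v)) / (2 * eps)
d1 = 0.5 * (gam @ v - v @ gam) - 0.25 * (om @ v @ gam + gam @ v @ om)
t = 1e-3
d2_fd = (dcayinv_m(om + t * gam, v) - 2 * dcayinv_m(om, v)
         + dcayinv_m(om - t * gam, v)) / t**2
d2 = -0.5 * gam @ v @ gam
err3 = max(np.max(np.abs(d1_fd - d1)), np.max(np.abs(d2_fd - d2)))

# 4) truncated dexp^{-1}: ad-series vs. its expanded polynomial form
o2, o3, o4 = om @ om, om @ om @ om, om @ om @ om @ om
expanded = (v - 0.5 * (v @ om - om @ v)
            + (o2 @ v + v @ o2 - 2 * om @ v @ om) / 12
            - (o4 @ v - 4 * o3 @ v @ om + 6 * o2 @ v @ o2
               - 4 * om @ v @ o3 + v @ o4) / 720)
err4 = np.max(np.abs(dexpinv_trunc(om, v) - expanded))

print(err1, err2, err3, err4)   # all dominated by finite-difference/rounding error
```

All four residuals should be tiny (the algebraic identities hold exactly; only discretization and floating-point error remain), which gives an independent cross-check of the directional-derivative formulas without redoing the calculus by hand.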