Algebraic discretization of time-independent Hamiltonian systems using a Lie-group/algebra approach
S. Bertrand
Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering, Department of Physics, Břehová 7, 115 19 Prague 1, Czech Republic
ARTICLE HISTORY
Compiled August 11, 2020
ABSTRACT
In this paper, time-independent Hamiltonian systems are investigated via a Lie-group/algebra formalism. The (unknown) solution linked with the Hamiltonian is considered to be a Lie-group transformation of the initial data, where the group parameter acts as the time. The time-evolution generator (i.e. the Lie algebra associated to the group transformation) is constructed at an algebraic level, hence avoiding discretization of the time-derivatives for the discrete case. This formalism makes it possible to investigate the continuous and discrete versions of time for time-independent Hamiltonian systems, and no additional information on the system is required (besides the Hamiltonian itself and the initial conditions of the solution). When the time-independent Hamiltonian system is integrable in the sense of Liouville, one can use the action-angle coordinates to straighten the time-evolution generator and construct an exact scheme (i.e. a scheme without errors). In addition, a method to analyse the errors of approximative/numerical schemes is provided. These considerations are applied to well-known examples associated with the one-dimensional harmonic oscillator.
KEYWORDS
Lie groups and algebras; Hamiltonian systems; integrals of motion; discrete- and continuous-time physics; numerical analysis; action-angle coordinates; canonical transformation; harmonic oscillator
1. Introduction
In physics, the concept of a discrete (space)time is gaining in interest [1]. Theories such as loop quantum gravity [26] use a discretization of the spacetime to solve some fundamental problems. In numerical analysis, the choice of discretization is critical to obtain good and efficient numerical approximations. The ease with which one gets approximations of solutions of very complex systems allows one to guess and explore behaviours of the systems even without being able to solve the differential equations. In general, however, from a physical or mathematical point of view, the breakdown from the continuous limit to the discrete case challenges conceptual considerations. The choice of the discretization scheme must be a smart one to get a consistent system and stable solutions, and especially to represent reality. If one uses an arbitrary scheme, the numerical approximation could be completely wrong. Over the years, many approaches were taken to study the time discretization of dynamical equations. Geometric approaches
Email: sebastien.bertrand@fjfi.cvut.cz

using symmetries seem to provide interesting results, see e.g. [5, 6, 13] and references therein.
Independently, the study of Lie groups and symmetries has been growing, especially in mathematical physics, see e.g. [22, 24, 25] and references therein. Among others, Lie symmetries have been successfully used to derive dynamical equations (such as the Dirac equation [12, 20, 28]), to reduce differential equations by facilitating the finding of their solutions (see e.g. [3, 11, 16]) and even in the context of numerical analysis, e.g. [5–8, 13, 14]. A particularly interesting theorem was provided by Noether [23], linking conservation laws to Lie symmetries. The conservation laws (or integrals of motion in the context of Hamiltonian systems) represent quantities that are preserved along any solution. These quantities led to the study of integrable systems in the sense of solitons (where a system of nonlinear partial differential equations possesses an infinite number of conservation laws [22]) and in the Liouville sense for Hamiltonian systems. In the case where the Hamiltonian is time-independent, integrability is defined by the existence of N integrals of motion that are functionally independent and in involution with respect to the Poisson bracket, where N is half the number of dimensions of the phase space. Integrable Hamiltonian systems possess a remarkable property, the existence of action-angle coordinates [2, 17]. Furthermore, some integrable Hamiltonian systems possess additional integrals of motion; they are called superintegrable. Maximally superintegrable systems possess the maximal number of integrals of motion, i.e. $2N - 1$, and they possess the characteristic that bounded trajectories are closed and periodic [4, 19, 21].
The goal of this paper is to investigate an algebraic discretization of time for time-independent Hamiltonian systems. In order to do so, we propose a Lie-group/algebra approach such that time can be treated as a group parameter. We ensure that the Lie-group transformation and its Lie algebra satisfy many intrinsic conditions, such as the canonicality and the invariance of the integrals of motion under the group transformation. In the case where the time-independent system is integrable, we can rectify the evolution generator using the action-angle coordinates. This leads to a method to obtain exact schemes. In addition, we study non-exact schemes. We propose a method to obtain schemes based on the above formalism, and we also investigate the errors of consistent schemes and provide methods to reduce them.
The paper is structured as follows. In section 2, we construct the formalism, first for the continuous case, and then for the discrete case of time-independent Hamiltonian systems. In section 3, we investigate the special case of time-independent integrable Hamiltonian systems. For these considerations, we provide a step-by-step method to construct exact schemes and we illustrate the method on the well-known case of the one-dimensional harmonic oscillator. In section 4, we apply the formalism of section 2 to learn more about non-exact (approximative) schemes. We illustrate these results using three different schemes for the one-dimensional harmonic oscillator: the exact scheme, the Euler method and the discrete gradient method. In section 5, we give some conclusions and provide some future perspectives.
2. Construction of the time-evolution generator
Let us consider a time-independent Hamiltonian $H = H(\vec{q}, \vec{p})$ defined on a $2N$-dimensional phase space $M$, where the positions and the momenta are described respectively by the canonical variables $q_j$ and $p_j$, $j = 1, \ldots, N$. The associated equations of motion are given by
$$\dot{q}_j = \frac{\partial H}{\partial p_j} = \{q_j, H\}, \qquad \dot{p}_j = -\frac{\partial H}{\partial q_j} = \{p_j, H\}, \qquad (2.1)$$
where the dot represents the total derivative with respect to time, i.e. $\dot{f}(t) = \frac{df(t)}{dt}$, and the bracket $\{\cdot, \cdot\}$ is the usual Poisson bracket, i.e.
$$\{a, b\} = \sum_{j=1}^{N} \left( \frac{\partial a}{\partial q_j} \frac{\partial b}{\partial p_j} - \frac{\partial b}{\partial q_j} \frac{\partial a}{\partial p_j} \right). \qquad (2.2)$$
The canonical variables $\vec{q}$, $\vec{p}$ satisfy the relations
$$\{q_i, p_j\} = \delta_{ij}, \qquad \{q_i, q_j\} = \{p_i, p_j\} = 0, \qquad i, j = 1, \ldots, N, \qquad (2.3)$$
where $\delta_{ij}$ is the Kronecker delta. One should note that since the Hamiltonian $H$ is time-independent, the system possesses at least one integral of motion, namely the Hamiltonian $H$ itself.
In addition, we will assume that the solution of the Hamiltonian system (2.1) with the initial condition
$$q_j(t_0) = c_j, \qquad p_j(t_0) = c_{j+N}, \qquad j = 1, \ldots, N, \qquad (2.4)$$
exists, is unique, smooth and equivalent to a curve in a $(2N+1)$-dimensional space (the phase space augmented with the time dimension).
Let us assume the existence of a Lie-group transformation $g_\Delta$ allowing an advance in time $\Delta$. We require that the Lie-group transformation is linear in time, that is, successive iterations are equivalent to one iteration of the sum of their steps, $g_{\Delta_1} \circ g_{\Delta_2} = g_{\Delta_1 + \Delta_2}$. However, instead of using an explicit advance in time, i.e. $g_\Delta \circ f(t) = f(t + \Delta)$, we will consider an implicit version, that is, the transformation will solely depend on the parameter $\Delta$ and the current-time variables $\vec{q}(t)$ and $\vec{p}(t)$, i.e.
$$g_\Delta \circ q_j \equiv Q_j = F_j(\vec{q}, \vec{p}; \Delta), \qquad g_\Delta \circ p_j \equiv P_j = G_j(\vec{q}, \vec{p}; \Delta), \qquad (2.5)$$
where the variables $Q_j$ and $P_j$ are the advanced-time variables and the transformation applies on the current-time variables $\vec{q}(t)$ and $\vec{p}(t)$.
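The canonical relations (2.3) can be checked numerically for any concrete set of coordinates. The following sketch (illustrative only, not part of the paper) approximates the Poisson bracket (2.2) by central finite differences, here for $N = 1$ and the rescaled harmonic oscillator Hamiltonian used in the later examples.

```python
# Numerical Poisson bracket {a,b} = da/dq db/dp - db/dq da/dp (N = 1),
# approximated with central finite differences of step h.
def poisson(a, b, q, p, h=1e-6):
    da_dq = (a(q + h, p) - a(q - h, p)) / (2 * h)
    da_dp = (a(q, p + h) - a(q, p - h)) / (2 * h)
    db_dq = (b(q + h, p) - b(q - h, p)) / (2 * h)
    db_dp = (b(q, p + h) - b(q, p - h)) / (2 * h)
    return da_dq * db_dp - db_dq * da_dp

H = lambda q, p: 0.5 * (p**2 + q**2)  # harmonic oscillator Hamiltonian

# {q, p} = 1 and {H, H} = 0 at an arbitrary phase-space point:
print(poisson(lambda q, p: q, lambda q, p: p, 0.3, -1.2))  # ≈ 1
print(poisson(H, H, 0.3, -1.2))                            # ≈ 0
```

Any antisymmetric combination $\{I, H\}$ of an integral of motion with the Hamiltonian can be tested the same way.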
The identity transformation is given by taking the limit of $\Delta$ to 0,
$$\lim_{\Delta \to 0} F_j(\vec{q}, \vec{p}; \Delta) = q_j, \qquad \lim_{\Delta \to 0} G_j(\vec{q}, \vec{p}; \Delta) = p_j, \qquad (2.6)$$
and the inverse transformation by setting $\Delta$ to $-\Delta$, i.e. $g_\Delta^{-1} = g_{-\Delta}$.
The associated infinitesimal deformation, the Lie algebra, is the vector field
$$g = \sum_{j=1}^{N} \left( \xi_j(\vec{q}, \vec{p}) \frac{\partial}{\partial q_j} + \eta_j(\vec{q}, \vec{p}) \frac{\partial}{\partial p_j} \right), \qquad (2.7)$$
where the functions $\xi_j$ and $\eta_j$ are usually defined by the relations
$$\xi_j(\vec{q}, \vec{p}) = \lim_{\Delta \to 0} \left( \frac{\partial}{\partial \Delta} F_j(\vec{q}, \vec{p}; \Delta) \right), \qquad \eta_j(\vec{q}, \vec{p}) = \lim_{\Delta \to 0} \left( \frac{\partial}{\partial \Delta} G_j(\vec{q}, \vec{p}; \Delta) \right). \qquad (2.8)$$
The group transformation (2.5) can be written using the step $\Delta$ and the evolution generator as
$$g_\Delta = \exp(\Delta g). \qquad (2.9)$$
The group transformation (2.5) and its Lie algebra (2.7) must satisfy many conditions. One of them is that they must leave the integrals of motion $I_k = I_k(\vec{q}, \vec{p})$ invariant. This can be expressed as
$$g \circ I_k = 0 \mod I_k = 0, \qquad (2.10)$$
which is the condition for the Lie algebra $g$ to be a symmetry of the algebraic set of integrals of motion. One can look e.g. in [24] for details on symmetries of sets of algebraic equations.
In addition, transformation (2.5) must be canonical. We require that the same relations as in (2.3) must be satisfied regardless of the advance in time, that is,
$$\{Q_i, P_j\} = \delta_{ij}, \qquad \{Q_i, Q_j\} = \{P_i, P_j\} = 0, \qquad i, j = 1, \ldots, N. \qquad (2.11)$$
For the mixed bracket $\{Q_i, P_j\}$, we have
$$\delta_{ij} = \{Q_i, P_j\} = \{F_i(\vec{q}, \vec{p}; \Delta), G_j(\vec{q}, \vec{p}; \Delta)\} = \{\exp(\Delta g) \circ q_i, \exp(\Delta g) \circ p_j\}. \qquad (2.12)$$
However, working in the Lie-group formalism, with the exponentials, is much harder than working in the Lie-algebra formalism, so we want to decompose the exponential and to know what the implications of (2.12) are on the vector field $g$. From one side, the composition of two canonical transformations remains canonical.
From the other side, the group transformation can be seen as applying $n$ times the infinitesimal deformation $1 + \frac{\Delta}{n} g$ and taking the limit of $n$ to infinity. Hence, if the infinitesimal deformation is canonical, so is the group transformation. This implies that the conditions from equation (2.12) are equivalent to
$$\delta_{ij} = \left\{ \left(1 + \tfrac{\Delta}{n} g\right) \circ q_i,\; \left(1 + \tfrac{\Delta}{n} g\right) \circ p_j \right\} = \left\{ q_i + \tfrac{\Delta}{n} \xi_i,\; p_j + \tfrac{\Delta}{n} \eta_j \right\}$$
$$= \{q_i, p_j\} + \frac{\Delta}{n} \left( \{q_i, \eta_j\} + \{\xi_i, p_j\} \right) + O\!\left(\frac{\Delta^2}{n^2}\right) = \delta_{ij} + \frac{\Delta}{n} \left( \frac{\partial \eta_j}{\partial p_i} + \frac{\partial \xi_i}{\partial q_j} \right) + O\!\left(\frac{\Delta^2}{n^2}\right),$$
hence
$$\frac{\partial \eta_j}{\partial p_i} + \frac{\partial \xi_i}{\partial q_j} = 0, \qquad (2.13)$$
and similarly for the relations $\{Q_i, Q_j\}$ and $\{P_i, P_j\}$:
$$\frac{\partial \xi_j}{\partial p_i} - \frac{\partial \xi_i}{\partial p_j} = 0, \qquad \frac{\partial \eta_j}{\partial q_i} - \frac{\partial \eta_i}{\partial q_j} = 0. \qquad (2.14)$$
The general solution to the canonical conditions (2.13) and (2.14) is
$$\xi_j(\vec{q}, \vec{p}) = \frac{\partial h(\vec{q}, \vec{p})}{\partial p_j}, \qquad \eta_j(\vec{q}, \vec{p}) = -\frac{\partial h(\vec{q}, \vec{p})}{\partial q_j}, \qquad (2.15)$$
where $h(\vec{q}, \vec{p})$ is an arbitrary function of the phase-space coordinates.
Combining the symmetry condition (2.10) with the canonical condition (2.15), we see that the function $h$ must Poisson-bracket commute with all the integrals of motion, i.e.
$$g \circ I_k = \sum_{j=1}^{N} \left( \frac{\partial h}{\partial p_j} \frac{\partial I_k}{\partial q_j} - \frac{\partial h}{\partial q_j} \frac{\partial I_k}{\partial p_j} \right) = \{I_k, h\} = 0 \mod I_k = 0. \qquad (2.16)$$
In 1 dimension, it is trivial to see that the only arbitrary function that can commute with the Hamiltonian $H$ is a function of the Hamiltonian. Hence, the vector field takes the form
$$g = h'(H) \left( \frac{\partial H}{\partial p} \frac{\partial}{\partial q} - \frac{\partial H}{\partial q} \frac{\partial}{\partial p} \right) = h'(H) \left( \dot{q} \frac{\partial}{\partial q} + \dot{p} \frac{\partial}{\partial p} \right) = h'(H) \frac{d}{dt},$$
which is equivalent to the symmetry of translation in time multiplied by a constant $h'(H)$. In fact, by setting $h = H$, we can recover the time evolution of the system, the equations of motion, by going back to the group transformation, i.e.
$$\frac{dq}{\xi(q, p)} = d\Delta = \frac{dp}{\eta(q, p)},$$
or more explicitly
$$\frac{dq}{d\Delta} = \frac{\partial H}{\partial p} = \xi(q, p), \qquad \frac{dp}{d\Delta} = -\frac{\partial H}{\partial q} = \eta(q, p), \qquad \frac{dq}{dp} = -\frac{\partial H}{\partial p} \bigg/ \frac{\partial H}{\partial q}, \qquad (2.17)$$
where the group parameter $\Delta$ acts as the time. Thus, the infinitesimal generator of evolution in time takes the form
$$g = \frac{\partial H}{\partial p} \frac{\partial}{\partial q} - \frac{\partial H}{\partial q} \frac{\partial}{\partial p}, \qquad \text{such that} \qquad g \circ \phi = \{\phi, H\}. \qquad (2.18)$$
In higher dimensions, the argument is less straightforward. For maximally superintegrable systems, only the Hamiltonian can Poisson-bracket commute with all the integrals of motion, hence the derivation is similar, but with more variables to consider. For non-maximally superintegrable systems, some integrals of motion may Poisson-bracket commute with all other integrals; however, only the Hamiltonian will ensure commutation, recover the time evolution in the form of the equations of motion and represent a time-translation symmetry. Therefore, the infinitesimal generator of the group transformation (2.5) is given uniquely by
$$g = \sum_{j=1}^{N} \left( \frac{\partial H}{\partial p_j} \frac{\partial}{\partial q_j} - \frac{\partial H}{\partial q_j} \frac{\partial}{\partial p_j} \right), \qquad g \circ \phi = \{\phi, H\}, \qquad (2.19)$$
and will be called the (time-)evolution generator. It represents the flow of the Hamiltonian vector field and is already known in the literature, see e.g. [24]. Here, we approached it without using total derivatives in time, which will be useful in the next section to avoid the definition of discrete time-derivatives.
A very similar derivation can be done in the discrete case, but problems arise when one wants to take the limit of $\Delta$ to 0. Hence, we need to use a different approach in the discrete case, where the limit $\Delta \to 0$ is not available. From the composition
$$g_{-\Delta} \circ g_\Delta = \exp(-\Delta g) \circ \exp(\Delta g) = \exp((\Delta - \Delta) g) = I, \qquad (2.20)$$
we can see that the identity exists and is well-defined without using the limit. For equations (2.8), we use the alternative definitions:
$$\xi_j(\vec{q}, \vec{p}) = \exp(-\Delta g) \circ \frac{\partial}{\partial \Delta} F_j(\vec{q}, \vec{p}; \Delta), \qquad (2.21)$$
$$\eta_j(\vec{q}, \vec{p}) = \exp(-\Delta g) \circ \frac{\partial}{\partial \Delta} G_j(\vec{q}, \vec{p}; \Delta), \qquad (2.22)$$
which are equivalent in the continuous case, but not in the discrete case.
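For the one-dimensional harmonic oscillator used in the later examples, the exponential (2.9) can be evaluated term by term, since $g \circ x = p$ and $g \circ p = -x$. The sketch below (illustrative, not part of the paper) sums the Lie series $\exp(\Delta g) \circ x = \sum_k \frac{\Delta^k}{k!} g^k \circ x$ and recovers the rotation $x \cos\Delta + p \sin\Delta$.

```python
import math

# Evolution generator g = p d/dx - x d/dp acting on the pair (x, p):
# g applied to x gives p, applied to p gives -x, so repeated application cycles
# (x, p) -> (p, -x) -> (-x, -p) -> (-p, x) -> (x, p).
def lie_series(x, p, delta, terms=30):
    gx, gp = x, p              # g^0 applied to (x, p)
    X = P = 0.0
    fact = 1.0
    for k in range(terms):
        if k > 0:
            fact *= k          # fact = k!
            gx, gp = gp, -gx   # apply g once more
        X += delta**k / fact * gx
        P += delta**k / fact * gp
    return X, P

X, P = lie_series(1.0, 0.0, 0.7)
# Compare against the exact flow x cos(Δ) + p sin(Δ), p cos(Δ) - x sin(Δ):
print(X - math.cos(0.7), P + math.sin(0.7))  # both ≈ 0
```

The truncation of the same series after the linear term gives exactly the Euler method discussed in section 4.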
It can be seen as applying the group transformation, then differentiating with respect to $\Delta$ and applying the inverse transformation. If one uses the limit $\Delta \to 0$, one recovers
$$g = \sum_{j=1}^{N} \left( \frac{\partial H}{\partial p_j} \frac{\partial}{\partial q_j} - \frac{\partial H}{\partial q_j} \frac{\partial}{\partial p_j} \right), \qquad g \circ \phi = \{\phi, H\}, \qquad (2.23)$$
which is the same as in the continuous case. Hence, the exact scheme of evolution of the system can be obtained by returning to the group transformation. However, we saw that getting back to the group transformation is equivalent to solving the equations of motion, which is not necessarily an easy task. This approach does not help to solve the equations of motion, but it provides a guide for the evolution of the system.
In the discrete case, interpolations between the points are not physically interesting. The continuous trajectory generated by the Lie group is to be considered a tool, not a real behaviour. This tool allows us to have an arbitrary mesh, which may be useful to jump over singularities.
Overall, the group transformation $g_\Delta$ can be seen as a transformation of the initial conditions at $t = t_0$ to new "initial conditions" at $t = t_0 + \Delta$. By applying the group transformation repeatedly, we can map the solution or a subset of it.
3. Exact schemes for integrable systems
Let us consider that the system (2.1) is integrable, i.e. there exist at least $N$ integrals of motion $I_k$ that are functionally independent and in involution with respect to the Poisson bracket (2.2). According to the Liouville–Arnold theorem [2, 17], integrable systems possess a distinguished set of coordinates called the action-angle coordinates. The action-angle coordinates can be obtained via a canonical transformation, and the associated equations of motion take the form
$$\frac{dz_k}{dt} = \nu_k(\vec{I}), \qquad \frac{dI_k}{dt} = 0, \qquad k = 1, \ldots, N. \qquad (3.1)$$
Even if the $\nu_k$ are functions of the $I_k$, they can be treated as constants since the $I_k$ are constant over time. Moreover, the Liouville–Arnold theorem states that the trajectories in the phase space are diffeomorphic to an $N$-dimensional torus under some additional conditions. However, we will ignore this geometric interpretation and weaken the constraints on the system and its solutions. Hence, by the action-angle coordinates, we solely refer to the system of coordinates leading to the equations of motion (3.1).
In the action-angle coordinates, the evolution generator is straightened to
$$g = \sum_{k=1}^{N} \nu_k(\vec{I}) \frac{\partial}{\partial z_k}, \qquad (3.2)$$
for which the associated group transformation is
$$z_k \to z_k + \nu_k \Delta, \qquad I_k \to I_k. \qquad (3.3)$$
The action-angle coordinates are a particularly good choice for discretization. This comes from the fact that $N$ integrals of motion will be preserved in the discretization by definition. Furthermore, since the frequencies $\nu_k$ are treated as constants, there is no arbitrariness in the choice of the discretization scheme, conversely to the choices made in the traditional methods, such as Runge–Kutta methods. Hence, if one knows the algebraic transformations between the original phase-space coordinates and the action-angle coordinates, then one can go to the action-angle coordinates to discretize and come back to the original system to obtain an exact scheme.
3.1. Method using a generating function of type 2

We assume that $N$ integrals of motion in involution are known explicitly. If the integrals of motion are not known, there exist many ways to find them. There is a tremendous number of articles in the literature trying to classify all integrable systems, see e.g. [4, 19, 21] and references therein. One could also try to find them by brute force or using physical intuition. Symbolic software, such as Maple [18], is able to find in some cases the Lie point symmetries of a system of differential equations. By combining it with Noether's theorem [23] (making a link between symmetries and conservation laws), one can find integrals of motion.
By performing the following steps, one should be able (at least in theory) to obtain the time-discretized version of the equations of motion associated with the Hamiltonian system. Here, we use a generating function to get the relations between the action and the angle variables, but other methods exist. In addition, one should note that the action-angle coordinates are not unique. One can always use a different function of the same integral and obtain different coordinates, which will lead to equivalent results.

Steps:
(1) Assign the commuting integrals of motion $I_k$ as the action coordinates, i.e.
$$I_k = I_k(\vec{q}, \vec{p}). \qquad (3.4)$$
(2) From equations (3.4), solve the momenta $p_k$ in terms of the generalized coordinates $q_k$ and the action coordinates $I_k$, i.e.
$$p_k = s_k(\vec{q}, \vec{I}). \qquad (3.5)$$
(3) Find a generating function $K$ of the second type satisfying the system of partial differential equations
$$\frac{\partial K(\vec{q}, \vec{I})}{\partial q_k} = s_k(\vec{q}, \vec{I}). \qquad (3.6)$$
(4) By substituting the solution for the generating function $K$ into the equations
$$z_k = \theta_k(\vec{q}, \vec{I}) = \frac{\partial K(\vec{q}, \vec{I})}{\partial I_k}, \qquad (3.7)$$
get the relations for the angle coordinates $z_k$.
(5) Solve the inverse transformation of $\theta_k$ to get the $q_k$ in terms of the action-angle coordinates, i.e.
$$q_k = r_k(\vec{z}, \vec{I}). \qquad (3.8)$$
(6) Calculate the frequencies $\nu_k$ using the relations of the angle coordinates $z_k$ and the original equations of motion, i.e.
$$\nu_k = \sum_{j=1}^{N} \frac{\partial \theta_k(\vec{q}, \vec{I})}{\partial q_j} \frac{\partial H(\vec{q}, \vec{p})}{\partial p_j} \bigg|_{t = t_0}. \qquad (3.9)$$
(7) Calculate the constant values of the integrals of motion, i.e.
$$\gamma_k = I_k(\vec{q}, \vec{p}) \big|_{t = t_0}. \qquad (3.10)$$

Results:
The advanced-time original coordinates $Q_k$, $P_k$ can be expressed using the current-time original coordinates $\vec{q}$, $\vec{p}$ as
$$Q_k = r_k(\theta_1(\vec{q}, \vec{\gamma}) + \nu_1 \Delta, \ldots, \theta_N(\vec{q}, \vec{\gamma}) + \nu_N \Delta, \gamma_1, \ldots, \gamma_N), \qquad (3.11)$$
$$P_k = s_k(\theta_1(\vec{q}, \vec{\gamma}) + \nu_1 \Delta, \ldots, \theta_N(\vec{q}, \vec{\gamma}) + \nu_N \Delta, \gamma_1, \ldots, \gamma_N), \qquad (3.12)$$
where $\Delta$ is the time step. The results remain true regardless of the method used to find the action-angle coordinates.
Let us consider the 1-dimensional harmonic oscillator given by the Hamiltonian
$$H = \frac{\rho^2}{2m} + \frac{k \zeta^2}{2}, \qquad (3.13)$$
where $\rho(\tau)$ is the momentum and $\zeta(\tau)$ is the position in time $\tau$. The equations of motion are given by
$$\dot{\zeta} = \frac{\rho}{m}, \qquad \dot{\rho} = -k\zeta. \qquad (3.14)$$
In order to simplify the notation, we will absorb the mass $m$ and the spring constant $k$ using the (extended) canonical transformation [1]
$$\zeta(\tau) = \frac{1}{\sqrt{k}}\, x(t), \qquad \rho(\tau) = \sqrt{m}\, p(t), \qquad \tau = \sqrt{\frac{m}{k}}\, t,$$
such that the new Hamiltonian becomes
$$H = \frac{p^2 + x^2}{2}, \qquad \frac{dx}{dt} = \frac{\partial H}{\partial p} = p, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial x} = -x. \qquad (3.16)$$
The analytic solution of this system is well-known and given by a rotation transformation in the phase space
$$x(t) = x_0 \cos(t) + p_0 \sin(t), \qquad (3.17)$$
$$p(t) = p_0 \cos(t) - x_0 \sin(t), \qquad (3.18)$$
where $x_0$ and $p_0$ represent the initial position and momentum.

Step 1: We set $H(x, p) = E$ to be the integral of motion used, i.e. the action coordinate.

Step 2:
We can solve $p$ in terms of $x$ and $E$, i.e.
$$p = \epsilon \sqrt{2E - x^2}, \qquad \epsilon = \pm 1. \qquad (3.19)$$

Step 3:
We can obtain the generating function by integration,
$$K(x, E) = \int p\, dx = \int \epsilon \sqrt{2E - x^2}\, dx = \frac{\epsilon}{2}\, x \sqrt{2E - x^2} + \epsilon E \arctan\left(\frac{x}{\sqrt{2E - x^2}}\right).$$

Step 4:
The angle coordinate $\theta$ is given by
$$\theta = \frac{\partial K}{\partial E} = \epsilon \arctan\left(\frac{x}{\sqrt{2E - x^2}}\right) = \arctan\left(\frac{x}{p}\right), \qquad (3.20)$$
together with the action variable
$$E = \frac{p^2 + x^2}{2}. \qquad (3.21)$$

Step 5:
From equations (3.20) and (3.21), we can see that this transformation is linked with the polar coordinates, i.e. by inverting the transformation, we get
$$x = \sqrt{2E} \sin(\theta), \qquad p = \sqrt{2E} \cos(\theta). \qquad (3.22)$$

Step 6:
To obtain the frequency of the system, one can check the equations of motion in the action-angle coordinates, which take the form
$$\frac{dE}{dt} = 0, \qquad \frac{d\theta}{dt} = 1. \qquad (3.23)$$

Step 7:
The integral of motion is a constant given by $E = \frac{p_0^2 + x_0^2}{2}$ evaluated at $t = 0$.

Result:
Therefore, after some algebraic manipulations, we get that the exact scheme using (3.11)–(3.12) is given by
$$X = x \cos(\Delta) + p \sin(\Delta), \qquad P = p \cos(\Delta) - x \sin(\Delta). \qquad (3.25)$$
In addition, the discrete equations of motion take the form
$$\frac{X - x}{\Delta} = \frac{x(\cos(\Delta) - 1) + p \sin(\Delta)}{\Delta}, \qquad (3.26)$$
$$\frac{P - p}{\Delta} = \frac{p(\cos(\Delta) - 1) - x \sin(\Delta)}{\Delta}. \qquad (3.27)$$
By taking the limit $\Delta \to 0$, we get back the equations of motion of the continuous case, which makes the scheme consistent. The discretization scheme of the one-dimensional harmonic oscillator corroborates the results found in [10] using a different approach.
It is also possible to apply repeatedly the transformations (3.25) with a constant step $\Delta$, such that it is possible to get an equivalent of the integration of the equations of motion, i.e. after applying the transformation $n$ times (where $n$ is an arbitrary integer), we get after some algebraic manipulations
$$x(n\Delta) = x_0 \cos(n\Delta) + p_0 \sin(n\Delta), \qquad p(n\Delta) = p_0 \cos(n\Delta) - x_0 \sin(n\Delta), \qquad (3.28)$$
where $n\Delta$ can be seen as the time $t$, i.e.
$$t = \lim_{\substack{n \to \infty \\ \Delta \to 0}} n\Delta. \qquad (3.29)$$
Similar results can be obtained using inhomogeneous step sizes.
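The exact scheme (3.25) can be iterated directly. A minimal sketch (illustrative, not from the paper) checks that $n$ steps of size $\Delta$ reproduce (3.28) and preserve the energy $E = \frac{p^2 + x^2}{2}$ up to round-off.

```python
import math

def exact_step(x, p, delta):
    """One iteration of the exact scheme (3.25): a rotation by delta."""
    return (x * math.cos(delta) + p * math.sin(delta),
            p * math.cos(delta) - x * math.sin(delta))

x, p = 1.0, 0.0          # initial conditions x0, p0
delta, n = 0.1, 200      # constant mesh; n * delta = 20
for _ in range(n):
    x, p = exact_step(x, p, delta)

# n iterations reproduce (3.28) and conserve 2E = x^2 + p^2:
print(abs(x - math.cos(n * delta)), abs(p + math.sin(n * delta)))  # ≈ 0
print(x**2 + p**2)                                                 # ≈ 1
```

The same loop works with a different `delta` at each step (an inhomogeneous mesh), accumulating the total rotation angle instead of $n\Delta$.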
4. Applications for non-exact schemes
Let us consider an approximative scheme
$$Q_j = F_j(\vec{q}, \vec{p}; \Delta), \qquad P_j = G_j(\vec{q}, \vec{p}; \Delta), \qquad (4.1)$$
which is not necessarily a group transformation with the parameter $\Delta$. We can write this transformation as
$$\psi_\Delta = g_\Delta + w_\Delta, \qquad (4.2)$$
such that the approximated advanced values $Q_j$ and $P_j$ are
$$Q_j = \psi_\Delta \circ q_j, \qquad P_j = \psi_\Delta \circ p_j, \qquad (4.3)$$
where $g_\Delta$ is the exact evolution transformation, i.e.
$$g_\Delta = \exp(\Delta g), \qquad g = \sum_{j=1}^{N} \left( \frac{\partial H}{\partial p_j} \frac{\partial}{\partial q_j} - \frac{\partial H}{\partial q_j} \frac{\partial}{\partial p_j} \right), \qquad (4.4)$$
and $w_\Delta$ is the local error of the step, taking the form of a vector field using the current-time variables,
$$w_\Delta = \sum_{j=1}^{N} \left( \alpha_j(\vec{q}, \vec{p}; \Delta) \frac{\partial}{\partial q_j} + \beta_j(\vec{q}, \vec{p}; \Delta) \frac{\partial}{\partial p_j} \right). \qquad (4.5)$$
If the transformation $\psi_\Delta$ is consistent, then the error $w_\Delta$ can be expressed using $\Delta^2$-terms and higher-order terms in $\Delta$, that is, the Taylor expansion takes the form
$$w_\Delta = \sum_{k=2}^{\infty} \frac{\Delta^k}{k!} v_k, \qquad v_k = \frac{\partial^k w_\Delta}{\partial \Delta^k} \bigg|_{\Delta = 0}, \qquad v_0 = v_1 = 0. \qquad (4.6)$$
In this formalism, we know $\psi_\Delta$ and $g$, but a priori not $g_\Delta = \exp(\Delta g)$, $w_\Delta$ and $v_k$. (One should note that $g_\Delta$ is known using $g$, but it may be hard to compute explicitly.) The lower-order $v_k$ can be computed from the known quantity
$$\psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta = (\exp(-\Delta g) + w_{-\Delta}) \circ \left( g \circ \exp(\Delta g) + \frac{\partial w_\Delta}{\partial \Delta} \right)$$
$$= g + \exp(-\Delta g) \circ \frac{\partial w_\Delta}{\partial \Delta} + w_{-\Delta} \circ g \circ \exp(\Delta g) + w_{-\Delta} \circ \frac{\partial w_\Delta}{\partial \Delta}. \qquad (4.7)$$
By expanding in $\Delta$ on both sides, we get for each coefficient of $\Delta$ the following relations:
$$\Delta^1: \quad \frac{\partial}{\partial \Delta} \left( \psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta \right) \bigg|_{\Delta=0} = v_2,$$
$$\Delta^2: \quad \frac{\partial^2}{\partial \Delta^2} \left( \psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta \right) \bigg|_{\Delta=0} = v_3 - g \circ v_2 + v_2 \circ g,$$
$$\Delta^3: \quad \frac{\partial^3}{\partial \Delta^3} \left( \psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta \right) \bigg|_{\Delta=0} = v_4 - g \circ v_3 + v_3 \circ g + 3 v_2 + 3\, g \circ v_2 + 3\, v_2 \circ g, \quad \ldots$$
From the $\Delta^1$-equation, we can obtain the second-order error $v_2$. Then, we can compute the third-order error $v_3$ from $v_2$ and the $\Delta^2$-equation.
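For the Euler scheme treated below, the $\Delta^1$-relation can be checked numerically. The following sketch (illustrative only) evaluates $\psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta$ on the coordinate $x$ by finite differences, applying the inverse map to the point before differentiating in $\Delta$, and recovers $v_2 \circ x = x$ at $\Delta = 0$.

```python
def psi(x, p, d):
    """Euler scheme (x, p) -> (x + d*p, p - d*x), used here as the test map."""
    return x + d * p, p - d * x

def K_x(x, p, d, h=1e-6):
    """(psi_{-d} composed with d/dDelta psi_Delta) applied to the coordinate x."""
    xm, pm = psi(x, p, -d)            # pull back by psi_{-d}
    x1, _ = psi(xm, pm, d + h)
    x2, _ = psi(xm, pm, d - h)
    return (x1 - x2) / (2 * h)        # derivative in Delta by central difference

# v_2 applied to x is d/dDelta of K_x at Delta = 0 (the Delta^1-relation);
# for the Euler scheme this should give back x itself:
x, p, h = 0.7, -0.4, 1e-4
v2_x = (K_x(x, p, h) - K_x(x, p, -h)) / (2 * h)
print(v2_x)  # ≈ 0.7
```

The result agrees with equation (4.18) below, where $\psi_{-\Delta} \circ \frac{\partial \psi_\Delta}{\partial \Delta} = (p + \Delta x)\frac{\partial}{\partial x} + (\Delta p - x)\frac{\partial}{\partial p}$.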
Then $v_4$ follows from the $\Delta^3$-equation, and so on. Therefore, the error of any scheme can be predicted analytically.
Being able to know the errors of a scheme allows one to control with better precision the numerical trajectories without even knowing the solution. Here, we propose two options to improve the numerical results:

• Find the associated invariants of the error
An invariant $\phi$ of the leading-order error $w_k$ will be more precise; that is because
$$w_k \circ \phi = 0, \qquad (4.8)$$
hence the error will be diminished. However, some information would be lost since the number of functionally independent invariants is always lower than the number of variables.

• Increase the order of the error
By subtracting the lowest-order error(s) $w_k$ from the transformation $\psi_\Delta$, one can raise the order of the error and get a more precise scheme. However, for geometric schemes, increasing the order of the error may break some qualitative properties.
A problem arises from the fact that the deformations of variables are only tangent to the solution (unless the variables are the action-angle coordinates). Since the deformations are tangent to the solution, the lower-order errors tend to spread into the higher-order terms and it gets harder to interpret correctly the errors. However, for the leading-order error, this method seems accurate, at least for non-stiff systems. As proposed in the conclusions, a better way to interpret geometrically the errors would help in this matter.
One should note that in general, the Euler method is equivalent to approximating the Lie-group transformation by a linear operator, that is
$$Q_j = \exp(\Delta g) \circ q_j \approx (1 + \Delta g) \circ q_j = q_j + \Delta \frac{\partial H}{\partial p_j} \qquad (4.9)$$
$$\Rightarrow \quad \frac{Q_j - q_j}{\Delta} \approx \frac{\partial H(\vec{q}, \vec{p})}{\partial p_j}, \qquad (4.10)$$
and similarly for $P_j$. One can truncate the exponential at a higher order to generate schemes; however, the results will depend on higher partial derivatives of the Hamiltonian. The Runge–Kutta method can be obtained by further expanding the partial derivatives using Taylor series [9, 15, 27].
Throughout the rest of this section, we will solely focus on examples of schemes for the 1-dimensional harmonic oscillator with the variables rescaled to absorb the parameters of the system, i.e.
$$H = \frac{p^2 + x^2}{2}. \qquad (4.11)$$
The evolution generator of this Hamiltonian is given by
$$g = p \frac{\partial}{\partial x} - x \frac{\partial}{\partial p}. \qquad (4.12)$$
To illustrate the numerical results, we consider the trajectory starting at $x(0) = 1$ and $p(0) = 0$ until it reaches $t = 20$. The continuous version is represented by a continuous red line and the discrete version is represented by blue dots, using the iteration step $\Delta = 0.$

4.1.1. The exact scheme

As a first example, we will consider the exact scheme from section 3.
The iteration scheme is given by
$$X = x \cos(\Delta) + p \sin(\Delta), \qquad P = p \cos(\Delta) - x \sin(\Delta), \qquad (4.13)$$
which suggests that the discrete equations of motion take the form
$$\frac{X - x}{\Delta} = \frac{x \cos(\Delta) + p \sin(\Delta) - x}{\Delta}, \qquad \frac{P - p}{\Delta} = \frac{p \cos(\Delta) - x \sin(\Delta) - p}{\Delta}. \qquad (4.14)$$
By direct substitution into equation (4.7), we obtain that the error $w_\Delta$ is null, as expected.
Figure 1 represents the trajectory in the phase space, Figure 2 represents the evolution of $x$ in time and Figure 3 represents the evolution of $p$ in time. Figure 4 shows the error over time in the phase space,
$$\sigma(x, p) = (x(t) - x(n\Delta))^2 + (p(t) - p(n\Delta))^2, \qquad (4.15)$$
such that $t = n\Delta$. In this scheme, the errors come from the numerical approximation of the software.

4.1.2. The Euler method

The discrete equations of motion for the Euler method are
$$\frac{X - x}{\Delta} = p, \qquad \frac{P - p}{\Delta} = -x, \qquad (4.16)$$
and the iteration scheme is given by
$$X = x + \Delta p, \qquad P = p - \Delta x, \qquad (4.17)$$
which is equivalent to the truncation of the exponential $\exp(\Delta g)$ after the linear term in $\Delta$.
By direct substitution into equation (4.7), we obtain
$$\psi_{-\Delta} \circ \frac{\partial \psi_\Delta}{\partial \Delta} = (p + \Delta x) \frac{\partial}{\partial x} + (\Delta p - x) \frac{\partial}{\partial p}. \qquad (4.18)$$
We can calculate the errors, that is,
$$v_0 = v_1 = 0, \qquad v_2 = x \frac{\partial}{\partial x} + p \frac{\partial}{\partial p}, \qquad v_3 = p \frac{\partial}{\partial x} + x \frac{\partial}{\partial p}, \qquad v_4 = -x \frac{\partial}{\partial x} - p \frac{\partial}{\partial p}, \quad \ldots$$
The second-order error $v_2$ represents a scaling transformation, i.e.
$$x \to e^\epsilon x, \qquad p \to e^\epsilon p. \qquad (4.19)$$
Hence, any function of the ratio $\frac{x}{p}$ will be an invariant of the error deformation $v_2$.
To illustrate the results, Figure 5 is the trajectory in the phase space, Figure 6 is the evolution of $x$ in time, Figure 7 is the evolution of $p$ in time and Figure 8 is the evolution of the invariant $\frac{x}{p}$ in time. In Figure 9, the error over time in the phase space $\sigma(x, p)$ is given by the black curve while the error over time of the invariant $\sigma(\frac{x}{p})$ is given by the green curve,
$$\sigma(x, p) = (x(t) - x(n\Delta))^2 + (p(t) - p(n\Delta))^2, \qquad (4.20)$$
$$\sigma\left(\frac{x}{p}\right) = \left(\frac{x(t)}{p(t)} - \frac{x(n\Delta)}{p(n\Delta)}\right)^2, \qquad (4.21)$$
such that $t = n\Delta$.
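The Euler iteration (4.17) is the linear map $(x, p) \to (x + \Delta p,\, p - \Delta x)$, whose matrix has determinant $1 + \Delta^2$. A short sketch (illustrative, not from the paper) shows the resulting energy growth: after $n$ steps, $2H = x^2 + p^2$ is $(1 + \Delta^2)^n$ times its initial value, which is the drift visible in the figures.

```python
def euler_step(x, p, delta):
    """One Euler iteration (4.17): truncation of exp(delta*g) at the linear term."""
    return x + delta * p, p - delta * x

x, p = 1.0, 0.0
delta, n = 0.1, 200
for _ in range(n):
    x, p = euler_step(x, p, delta)

# Each step scales 2H = x^2 + p^2 by exactly (1 + delta^2):
print(x**2 + p**2, (1 + delta**2) ** n)  # the two values agree
```

The scaling factor follows from $(x + \Delta p)^2 + (p - \Delta x)^2 = (1 + \Delta^2)(x^2 + p^2)$, i.e. the cross terms cancel, which is the scaling deformation identified as $v_2$ above.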
We can see in Figure 9 that the ratio $\frac{x}{p}$ provides a better approximation than the set of variables $x$ and $p$. (Of course, the approximation of the invariant is not good when $p$ approaches zero since it hits a singularity, but around those points, one can use the inverse function $\frac{p}{x}$ to get a better approximation.) In addition, we can see in Figure 8 that the blue dots are drifting to the right of the curve. This can be explained by higher-order errors generating an extra time translation.
In this case, if one corrects the scheme by subtracting the errors $v_2$, $v_3$ and $v_4$ of the Euler scheme, we obtain
$$X = x + \Delta p - \frac{\Delta^2}{2} x - \frac{\Delta^3}{6} p + \frac{\Delta^4}{24} x, \qquad P = p - \Delta x - \frac{\Delta^2}{2} p + \frac{\Delta^3}{6} x + \frac{\Delta^4}{24} p, \qquad (4.22)$$
which is equivalent to the Runge–Kutta scheme (RK4), i.e.
$$\frac{X - x}{\Delta} = p - \frac{\Delta}{2} x - \frac{\Delta^2}{6} p + \frac{\Delta^3}{24} x, \qquad \frac{P - p}{\Delta} = -x - \frac{\Delta}{2} p + \frac{\Delta^2}{6} x + \frac{\Delta^3}{24} p. \qquad (4.23)$$

4.1.3. The discrete gradient method

Let us consider the discrete-gradient method which preserves the Hamiltonian (4.11), i.e. the discrete equations of motion take the form
$$\frac{X - x}{\Delta} = \frac{1}{2} \frac{P^2 - p^2}{P - p}, \qquad \frac{P - p}{\Delta} = -\frac{1}{2} \frac{X^2 - x^2}{X - x}, \qquad (4.24)$$
and the iteration scheme is given by
$$X = \frac{(4 - \Delta^2) x + 4\Delta p}{4 + \Delta^2}, \qquad P = \frac{(4 - \Delta^2) p - 4\Delta x}{4 + \Delta^2}. \qquad (4.25)$$
It is interesting to note that for this scheme, the inverse transformation is the same as inverting the sign of $\Delta$. However, it is not a time-linear Lie-group transformation since the combination of two iterations is not the iteration of the sum of the steps, e.g.
$$\psi_{-\Delta} \circ \psi_\Delta = 1, \qquad \psi_\Delta \circ \psi_\Delta \circ x \neq F_j(x, p; 2\Delta), \qquad \psi_\Delta \circ \psi_\Delta \circ p \neq G_j(x, p; 2\Delta). \qquad (4.26)$$
Applying two iterations with the same step $\Delta$ is not equivalent to applying one iteration with the step $2\Delta$.
By direct substitution into equation (4.7), we obtain
$$\psi_{-\Delta} \circ \frac{\partial}{\partial \Delta} \psi_\Delta = \left( \frac{4}{4 + \Delta^2} \right) \left( p \frac{\partial}{\partial x} - x \frac{\partial}{\partial p} \right), \qquad (4.27)$$
and the lower-order terms take the form
$$v_0 = v_1 = v_2 = 0, \qquad v_3 = -\frac{1}{2} \left( p \frac{\partial}{\partial x} - x \frac{\partial}{\partial p} \right), \qquad v_4 = 2 \left( x \frac{\partial}{\partial x} + p \frac{\partial}{\partial p} \right), \qquad v_5 = \frac{13}{2} \left( p \frac{\partial}{\partial x} - x \frac{\partial}{\partial p} \right), \quad \ldots$$
We can see that the error is of order 3 and higher, and the first term is proportional to the deformation $g$. As we can see, the leading-order error is a translation in time, that is, the numerical solution will remain on the trajectory, but it represents a point too far in time. By computing the solution for greater times, we can see that the graphs of the position and momentum in time will slowly drift to the right of the exact curves.
To illustrate the results, Figure 10 is the trajectory in the phase space, Figure 11 is the evolution of $x$ in time, Figure 12 is the evolution of $p$ in time and Figure 13 is the evolution of $H$ in time. In Figure 14, the error over time in the phase space $\sigma(x, p)$ is given by the black curve while the error over time of the invariant $\sigma(2H)$ is given by the green curve,
$$\sigma(x, p) = (x(t) - x(n\Delta))^2 + (p(t) - p(n\Delta))^2, \qquad (4.28)$$
$$\sigma(2H) = \left( p(t)^2 + x(t)^2 - p(n\Delta)^2 - x(n\Delta)^2 \right)^2, \qquad (4.29)$$
such that $t = n\Delta$.
Knowing that the scheme is a non-linear-in-time group transformation and that the errors solely take the form of an extra translation in time, we can express the approximative scheme differently, i.e.
$$\psi_\Delta = \exp((\Delta + W(\Delta)) g), \qquad (4.30)$$
where $W(\Delta)$ is a function of $\Delta$ representing the error on time. Using (4.27) and the new definition, we can calculate $W(\Delta)$ to obtain
$$W(\Delta) = -\Delta + 2 \arctan\left(\frac{\Delta}{2}\right). \qquad (4.31)$$
Hence, by breaking the assumption that $\Delta$ acts as the time $t$, we can correct the scheme by parametrizing the time by a function of $\Delta$. The resulting scheme will be exact, i.e.
$$X = \frac{(4 - \Delta^2) x + 4\Delta p}{4 + \Delta^2}, \qquad P = \frac{(4 - \Delta^2) p - 4\Delta x}{4 + \Delta^2}, \qquad T = t + 2 \arctan\left(\frac{\Delta}{2}\right),$$
where $X$, $P$ and $T$ are the "advanced-time" coordinates in the augmented phase space.
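The discrete-gradient iteration (4.25) can be checked directly. The sketch below (illustrative, not from the paper) verifies that the map preserves the Hamiltonian (4.11) exactly and that each step is a rotation by $2 \arctan(\Delta/2)$ rather than by $\Delta$, i.e. the leading error is a pure time translation.

```python
import math

def dg_step(x, p, delta):
    """One discrete-gradient iteration (4.25); preserves H = (p^2 + x^2)/2 exactly."""
    d2 = delta * delta
    X = ((4 - d2) * x + 4 * delta * p) / (4 + d2)
    P = ((4 - d2) * p - 4 * delta * x) / (4 + d2)
    return X, P

x0, p0, delta = 1.0, 0.0, 0.4
X, P = dg_step(x0, p0, delta)

print(X**2 + P**2 - (x0**2 + p0**2))        # ≈ 0: energy preserved exactly
# The step advances the true time by 2*arctan(delta/2), not by delta:
phi = 2 * math.atan(delta / 2)
print(X - math.cos(phi), P + math.sin(phi))  # both ≈ 0
```

The identity $\cos(2\arctan(\Delta/2)) = \frac{4 - \Delta^2}{4 + \Delta^2}$, $\sin(2\arctan(\Delta/2)) = \frac{4\Delta}{4 + \Delta^2}$ shows that the map is an orthogonal rotation, consistent with the corrected time $T = t + 2\arctan(\Delta/2)$ above.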
5. Conclusions

In summary, we constructed a formalism allowing us to express solutions of time-independent Hamiltonian systems as deformations of the initial conditions via a Lie-group transformation. We found explicitly the associated Lie algebra, which allows us to compare approximative/numerical schemes with the exact scheme or the real solution. This comparison provides information on the amplitude and the type of the errors.

More precisely, in section 2 we constructed the formalism for time-independent Hamiltonian systems using Lie-group transformations and an evolution generator taking values in the associated Lie algebra. This formalism considers an implicit advance in time, i.e. using the dependent variables, instead of an explicit advance in time, that is, transforming solely the time. Under the canonical-transformation conditions, the algebraic preservation of all integrals of motion and an equivalent of a time translation, we were able to determine the evolution generator. This vector field generates a Lie-group transformation which is consistent with both continuous and discrete versions of time. The mesh of the discretization is left without constraints. One of the advantages of this method is that we do not need to discretize differential equations; we look for an algebraic substitute instead. Nothing needs to be known about the system except the Hamiltonian and the initial conditions.

In section 3, we proposed a method to construct exact schemes for time-independent Hamiltonian systems that are integrable. These schemes keep the full precision of the continuous case with an arbitrary mesh. However, this method also has some drawbacks. Some integrals of motion must be known, which can be a tremendous task by itself. The method can also hit computational problems when one has to find the action-angle coordinates and/or the (inverse) canonical transformations, i.e.
to give the relation between the original coordinates and the action-angle coordinates.

In section 4, we investigated non-exact schemes using the formalism of section 2. This allows us to determine the errors coming from the numerical schemes. It is possible to correct them or to look for functions of the coordinates (invariants of the errors) that are more precise. In addition, we showed how the Euler method can be derived from this formalism. We illustrated these considerations via the one-dimensional harmonic oscillator. For the Euler method, we were able to correct the scheme to obtain RK4. We also looked into the discrete-gradient method, which preserves the Hamiltonian.

This Lie-algebra formalism can be extended in many directions: for one, providing a similar proof of this formalism for autonomous systems of ordinary differential equations. It would be of great use to generalize the formalism to include time-dependent systems. Also, it would be interesting to approach perturbation theory using this method, that is, the case where the first-order error is non-zero. It could also be interesting to break the relation between the group parameter and the time, allowing for an implicit parametrization of the spacetime. In a more general way, ordinary differential equations are often used as a basis for partial differential equations, so it would be interesting to further extend this formalism to partial differential equations. It could also have some applications to delay equations. Lastly, a geometric study of the errors of different schemes could be undertaken to investigate the global errors, or methods to obtain exact schemes from non-exact schemes using additional numerical-analysis tools.

Acknowledgements
SB was partially supported by postdoctoral fellowships provided by the Fonds de Recherche du Québec – Nature et Technologies (FRQNT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). SB would like to thank Libor Šnobl (ČVUT, Czech Republic) and Adel F. Antippa (UQTR, Canada) for interesting discussions on the subject of this paper.