New progress in the inverse problem in the calculus of variations
Thoan Do and Geoff Prince∗
Department of Mathematics and Statistics, La Trobe University, Victoria 3086, Australia.
The Australian Mathematical Sciences Institute, c/o The University of Melbourne, Victoria 3010, Australia.
October 20, 2018
Abstract.
We present a new class of solutions for the inverse problem in the calculus of variations in arbitrary dimension $n$. This is the problem of determining the existence and uniqueness of Lagrangians for systems of $n$ second-order ordinary differential equations. We also provide a number of new theorems concerning the inverse problem using exterior differential systems (EDS) theory. Concentrating on the differential ideal step of the EDS process, our new results provide a significant advance in the understanding of the inverse problem in arbitrary dimension. In particular, we indicate how to generalise Jesse Douglas's famous solution for $n = 2$. We give some non-trivial examples in dimensions 2, 3 and 4. We finish with a new classification scheme for the inverse problem in arbitrary dimension.

The inverse problem in the calculus of variations can be expressed as follows. Given a system of second-order ordinary differential equations
$$\ddot{x}^a = F^a(t, x^b, \dot{x}^b), \qquad a, b = 1, \dots, n,$$
the question is whether the solutions of this system are also the solutions of the Euler-Lagrange equations,
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}^a}\right) - \frac{\partial L}{\partial x^a} = 0,$$
for some regular Lagrangian $L(t, x^b, \dot{x}^b)$. This problem was first proposed by Helmholtz [11] in 1887. He considered whether the equations in the form presented were Euler-Lagrange and, in the case of single equations, found necessary conditions for this to be true. It is not well known that Sonin [23] solved this one-dimensional problem the previous year in a more general form, although Hirsch [12] is credited with the so-called multiplier version of the inverse problem: finding a non-degenerate matrix $g_{ab}(t, x^c, \dot{x}^c)$ satisfying
$$g_{ab}(\ddot{x}^b - F^b) \equiv \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}^a}\right) - \frac{\partial L}{\partial x^a}.$$
∗ Email: [email protected], [email protected]
Necessary and sufficient conditions for the existence of a regular Lagrangian, according to Douglas [9] and to Sarlet [19], are that this multiplier satisfy
$$g_{ab} = g_{ba}, \qquad \Gamma(g_{ab}) = g_{ac}\Gamma^c_b + g_{bc}\Gamma^c_a, \qquad g_{ac}\Phi^c_b = g_{bc}\Phi^c_a, \qquad \frac{\partial g_{ab}}{\partial \dot{x}^c} = \frac{\partial g_{ac}}{\partial \dot{x}^b}, \qquad (1)$$
where
$$\Gamma^a_b := -\frac{1}{2}\frac{\partial F^a}{\partial \dot{x}^b}, \qquad \Phi^a_b := -\frac{\partial F^a}{\partial x^b} - \Gamma^c_b\Gamma^a_c - \Gamma(\Gamma^a_b),$$
and where
$$\Gamma := \frac{\partial}{\partial t} + \dot{x}^a\frac{\partial}{\partial x^a} + F^a\frac{\partial}{\partial \dot{x}^a}.$$
These conditions, along with non-degeneracy, are commonly referred to as the
Helmholtz conditions. If one or more matrices $g_{ab}$ satisfying these four Helmholtz conditions are found, then there exist one (or more) Lagrangians related to these matrices by the expression
$$\frac{\partial^2 L}{\partial \dot{x}^a \partial \dot{x}^b} = g_{ab}.$$
Integrating this for a given $g_{ab}$, we see that the related Lagrangian $L$ is only defined up to the addition of a total time derivative of an arbitrary function of $t, x^a$.

The multiplier problem was completely solved by Douglas in 1941 for the two-dimensional case (see [9]), that is, for a pair of second-order equations on the plane. He divided the problem into four primary cases (I to IV) according to the properties of the matrix $\Phi^a_b$. The corresponding solution for higher dimensions, even for dimension 3, remained unsolved until the late nineteen-nineties, when some arbitrary-$n$ subcases were elaborated [5, 20, 3].

The current attacks on this problem, dating back to the 1980s, involve the creation and use of various differential-geometric tools. We offer the reader the following references, which give some perspective on these developments: [3, 6, 7, 10, 14, 16, 17, 21].

The current situation, in the framework of [7], is that the following dimension-$n$ situations are solved in the sense of Douglas.

1. $\Phi$ is a multiple of the identity. The multiplier is determined by $n$ arbitrary functions, each of $n + 1$ variables. This is the extension of Douglas's case I. See [2, 3, 20].

2. $\Phi$ is diagonalisable with distinct eigenvalues and "integrable eigenspaces". The multiplier is determined by $n$ arbitrary functions, each of 2 variables. This is the extension of Douglas's case IIa1. See [5, 1].

3. There are many non-existence results depending on technical conditions on $\Phi$. See [18].

In the context of our current paper, Anderson and Thompson [3] made a significant breakthrough by applying the method of exterior differential systems (EDS) to the inverse problem. They illustrated the effectiveness of the method by many concrete examples.
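The four conditions in (1) are mechanical to verify for a concrete system and candidate multiplier. The following sketch (not from the paper; the system, the multiplier and all names are our own illustrative choices) uses Python's sympy to confirm that $g_{ab} = \delta_{ab}$ satisfies the Helmholtz conditions for the isotropic oscillator $\ddot{x}^a = -x^a$ with $n = 2$:

```python
import sympy as sp

t, x1, x2, u1, u2 = sp.symbols('t x1 x2 u1 u2')
x, u, n = [x1, x2], [u1, u2], 2
F = [-x1, -x2]  # the isotropic oscillator x''^a = -x^a (an illustrative choice)

def Gamma(f):
    # the SODE field Gamma = d/dt + u^a d/dx^a + F^a d/du^a applied to f
    return sp.diff(f, t) + sum(u[a]*sp.diff(f, x[a]) + F[a]*sp.diff(f, u[a])
                               for a in range(n))

# Gamma^a_b := -(1/2) dF^a/du^b  and
# Phi^a_b := -dF^a/dx^b - Gamma^c_b Gamma^a_c - Gamma(Gamma^a_b)
Gam = [[-sp.Rational(1, 2)*sp.diff(F[a], u[b]) for b in range(n)] for a in range(n)]
Phi = [[-sp.diff(F[a], x[b]) - sum(Gam[c][b]*Gam[a][c] for c in range(n))
        - Gamma(Gam[a][b]) for b in range(n)] for a in range(n)]

g = sp.eye(n)  # candidate multiplier g_ab = delta_ab

checks = []
for a in range(n):
    for b in range(n):
        checks.append(g[a, b] - g[b, a])                                  # symmetry
        checks.append(Gamma(g[a, b])
                      - sum(g[a, c]*Gam[c][b] + g[b, c]*Gam[c][a] for c in range(n)))
        checks.append(sum(g[a, c]*Phi[c][b] - g[b, c]*Phi[c][a] for c in range(n)))
        checks += [sp.diff(g[a, b], u[c]) - sp.diff(g[a, c], u[b]) for c in range(n)]

print(all(sp.simplify(e) == 0 for e in checks))  # True
```

For this system the Lagrangian recovered from $\partial^2 L/\partial\dot{x}^a\partial\dot{x}^b = g_{ab}$ is the familiar $L = \frac{1}{2}(\dot{x}^a\dot{x}^a - x^ax^a)$, up to a total time derivative.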
In their paper, however, only Douglas's case I, where $\Phi$ is a multiple of the identity, was generalised to arbitrary $n$. Aldridge [1] and Aldridge et al [2] pursued this EDS approach further using the connection of Massa and Pagani [15]. While the thesis [1] re-investigated Douglas's case IIa2 in dimension 2 and recovered case IIa1 for arbitrary dimension, the paper [2] focused only on the first step of the EDS process, the so-called differential ideal step.

In this paper we give results, using exterior differential systems (EDS) theory, for the cases where some eigen co-distributions are integrable and some are not. We provide a number of quite general theorems and propositions concerning the inverse problem via EDS in Section 3.3.2. Among these results, Theorem 3.12 is particularly useful in indicating non-existence cases. In addition, it suggests a scheme to replace the four cases of Douglas in the treatment of the arbitrary-dimension problem. Up until now it has not been obvious how to do this because of the low dimensionality; however, we find that when $\Phi$ is diagonalisable the integrability of the co-distributions must be considered first and the termination of the differential ideal step second. Our scheme is detailed in Section 6. In Section 4 we explicitly solve the inverse problem for systems of second-order ODEs in arbitrary dimension for which $\Phi$ is diagonalisable with distinct eigenvalues and exactly one co-distribution is non-integrable. This can be seen as an extension of Douglas's case IIa2 in which, for $n = 2$, one co-distribution is integrable and one is not. The results are given in Theorem 4.1.

Of course, in general, $\Phi$ will not be diagonalisable over the reals. But when it is, the eigenvalues will generally be distinct and none of the eigen co-distributions will be integrable. Even in this situation there are cases where solutions exist. Indeed, it is remarkable that so many of the cases are well populated with variational equations.
We will illustrate our own results with a number of examples in Section 5.

We now briefly describe the basics of our geometrical calculus; for more details we refer to the book chapter [14]. We are analysing a system of second-order ordinary differential equations
$$\ddot{x}^a = F^a(t, x^b, \dot{x}^b), \qquad a, b = 1, \dots, n, \qquad (2)$$
for some smooth $F^a$ on a manifold $M$ with generic local coordinates $(x^a)$. The evolution space $E := \mathbb{R} \times TM$ has adapted coordinates $(t, x^a, \dot{x}^a)$ or $(t, x^a, u^a)$. From the system of equations (2) we construct on $E$ a second-order differential equation field (SODE):
$$\Gamma := \frac{\partial}{\partial t} + u^a\frac{\partial}{\partial x^a} + F^a\frac{\partial}{\partial u^a}. \qquad (3)$$
The SODE produces on $E$ a nonlinear connection with components $\Gamma^a_b := -\frac{1}{2}\frac{\partial F^a}{\partial u^b}$. The evolution space $E$ has a number of natural structures. The contact and vertical structures are combined in the vertical endomorphism $S$, a (1,1) tensor field on $E$ with coordinate expression $S := V_a \otimes \theta^a$, where $V_a := \frac{\partial}{\partial u^a}$ are the vertical basis fields and $\theta^a := dx^a - u^a dt$ are local contact forms. It is shown in [6] that the first-order deformation $\mathcal{L}_\Gamma S$ has eigenvalues $0, 1, -1$. The corresponding eigenspaces are $Sp\{\Gamma\}$, the vertical subspace $V(E) := Sp\{V_a\}$ of the tangent space, and the horizontal subspace $H(E) := Sp\{H_a\}$ respectively, where
$$H_a = \frac{\partial}{\partial x^a} - \Gamma^b_a\frac{\partial}{\partial u^b}.$$
So an adapted local basis on $E$ is given by $\{\Gamma, V_a, H_a\}$, with dual basis $\{dt, \psi^a, \theta^a\}$, where $\psi^a := du^a - F^a dt + \Gamma^a_b\theta^b$. Following directly from the definitions of $\Gamma$, $V_a$ and $H_a$ given in [6, 17] we note:
$$[\Gamma, H_a] = \Gamma^b_a H_b + \Phi^b_a V_b, \qquad [\Gamma, V_a] = -H_a + \Gamma^b_a V_b, \qquad [V_a, V_b] = 0,$$
$$[H_a, V_b] = -\frac{1}{2}\frac{\partial^2 F^c}{\partial u^a \partial u^b}V_c = V_b(\Gamma^c_a)V_c = V_a(\Gamma^c_b)V_c = [H_b, V_a],$$
and $[H_a, H_b] = R^d_{ab}V_d$, where the curvature of the connection is defined by
$$R^d_{ab} := \frac{1}{2}\left(\frac{\partial^2 F^d}{\partial x^a \partial u^b} - \frac{\partial^2 F^d}{\partial x^b \partial u^a} + \frac{1}{2}\left(\frac{\partial F^c}{\partial u^a}\frac{\partial^2 F^d}{\partial u^c \partial u^b} - \frac{\partial F^c}{\partial u^b}\frac{\partial^2 F^d}{\partial u^c \partial u^a}\right)\right).$$
Following [22] we briefly introduce vector fields and forms along the projection $\pi : E \to \mathbb{R} \times M$. Vector fields along $\pi$ are sections of the pull-back bundle $\pi^*(T(\mathbb{R} \times M))$ over $E$; $\mathfrak{X}(\pi)$ denotes the $C^\infty(E)$-module of such vector fields. In the current context see [13]. With $X := X^0\frac{\partial}{\partial t} + X^a\frac{\partial}{\partial x^a}$ and $dt, \theta^a \in \mathfrak{X}^*(\pi)$, we define the following lifts from $\mathfrak{X}(\pi)$ to $\mathfrak{X}(E)$:
$$X^\Gamma := dt(X)\Gamma, \qquad X^H := \theta^a(X)H_a, \qquad X^V := \theta^a(X)V_a.$$
For an intrinsic treatment of these various lifts see [14]. Any vector field $W \in \mathfrak{X}(E)$ can then be uniquely decomposed as
$$W = (W_\Gamma)^\Gamma + (W_H)^H + (W_V)^V,$$
where $W_\Gamma, W_H, W_V \in \mathfrak{X}(\pi)$ with
$$W_\Gamma := dt(W)\left(\frac{\partial}{\partial t} + u^a\frac{\partial}{\partial x^a}\right), \qquad W_H := dt(W)\frac{\partial}{\partial t} + dx^a(W)\frac{\partial}{\partial x^a}, \qquad W_V := \psi^a(W)\frac{\partial}{\partial x^a}.$$
The dynamical covariant derivative $\nabla$ and the Jacobi endomorphism
$\Phi$ that appear in (1) are defined along the tangent bundle projection through the following formulas:
$$[\Gamma, X^V] = -X^H + (\nabla X)^V, \qquad [\Gamma, X^H] = (\nabla X)^H + (\Phi(X))^V.$$
In coordinates, $\Phi = \Phi^a_b\frac{\partial}{\partial x^a} \otimes \theta^b$, with $\Phi^a_b$ as defined before. $\nabla$ can be extended to act on forms along the projection and is given in coordinates by
$$\nabla f = \Gamma(f) \text{ on functions}, \qquad \nabla\frac{\partial}{\partial x^a} = \Gamma^b_a\frac{\partial}{\partial x^b}, \qquad \nabla\theta^a = -\Gamma^a_b\theta^b.$$
Massa and Pagani [15] construct a linear connection on $E$ by imposing some natural requirements. If we denote this connection by $\hat\nabla$, these are that the covariant differentials $\hat\nabla dt$, $\hat\nabla S$ and $\hat\nabla\Gamma$ are all zero and that the vertical sub-bundle is flat. In coordinates, they are:
$$\hat\nabla_\Gamma\Gamma = 0, \qquad \hat\nabla_\Gamma H_a = \Gamma^b_a H_b, \qquad \hat\nabla_\Gamma V_a = \Gamma^b_a V_b,$$
$$\hat\nabla_{H_a}\Gamma = 0, \qquad \hat\nabla_{H_a}H_b = \frac{\partial\Gamma^c_a}{\partial u^b}H_c, \qquad \hat\nabla_{H_a}V_b = \frac{\partial\Gamma^c_a}{\partial u^b}V_c, \qquad (4)$$
$$\hat\nabla_{V_a}\Gamma = 0, \qquad \hat\nabla_{V_a}H_b = 0, \qquad \hat\nabla_{V_a}V_b = 0.$$
With any linear connection $\tilde\nabla$ (not to be confused with the dynamical covariant derivative) on a manifold $M$ there is an associated shape map and torsion (see [13]). The shape map $A_Z : \mathfrak{X}(M) \to \mathfrak{X}(M)$, as given in Jerie and Prince [13], is defined as
$$A_Z(\xi) := \frac{d}{dt}\Big|_{t=0}\tau_{-t}(\zeta_{t*}\xi), \qquad \xi \in T_xM,$$
where $\tau_t : T_xM \to T_{\zeta_t(x)}M$ is the parallel transport map, defined along the flow $\{\zeta_t\}$ of $Z$. More useful representations of this shape map on $M$ are
$$A_XY = \tilde\nabla_XY - [X, Y], \qquad A_XY = \tilde\nabla_YX + T(X, Y),$$
where
$X, Y \in \mathfrak{X}(M)$ and the torsion is defined as usual by $T(X, Y) := \tilde\nabla_XY - \tilde\nabla_YX - [X, Y]$. The torsion $\hat T$ of the Massa-Pagani connection captures all the important properties of the SODE $\Gamma$:
$$\hat T(\Gamma, V_a) = H_a, \qquad \hat T(\Gamma, H_a) = -\Phi^b_a V_b, \qquad \hat T(V_a, V_b) = 0, \qquad \hat T(V_a, H_b) = 0, \qquad \hat T(H_a, H_b) = -R^c_{ab}V_c.$$
The commutators of the vector fields on $E$ can be written in terms of the connection $\hat\nabla$ and the shape map as
$$[\Gamma, X^V] = \hat\nabla_\Gamma X^V - A_\Gamma(X^V), \qquad (5a)$$
$$[\Gamma, X^H] = \hat\nabla_\Gamma X^H - A_\Gamma(X^H), \qquad (5b)$$
$$[X^V, Y^V] = \hat\nabla_{X^V}Y^V - \hat\nabla_{Y^V}X^V, \qquad (5c)$$
$$[X^V, Y^H] = \hat\nabla_{X^V}Y^H - \hat\nabla_{Y^H}X^V, \qquad (5d)$$
$$[X^H, Y^H] = \hat\nabla_{X^H}Y^H - \hat\nabla_{Y^H}X^H + R(X^H, Y^H). \qquad (5e)$$
It can be seen that $\hat\nabla_\Gamma$ and $A_\Gamma$ play the roles of $\nabla$ and $\Phi$ respectively. The curvature tensor $R$ in (5e) is a (1,2) tensor field on $E$ defined as
$$R := R^c_{ab}(\theta^a \wedge \theta^b) \otimes V_c.$$
We will make no notational distinction between $\Phi$ acting along the tangent bundle projection and acting as an endomorphism on $E$ as $\Phi = \Phi^b_a V_b \otimes \theta^a$. In the next section we briefly sketch the ideas of the exterior differential systems approach, specifically in the context of the inverse problem.
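As a computational aside (not from the paper), the bracket relations above can be verified mechanically for any concrete SODE. The sketch below uses sympy, with an arbitrarily chosen $F^a$, to check $[\Gamma, V_a] = -H_a + \Gamma^b_a V_b$ by treating vector fields on $E$ as derivations:

```python
import sympy as sp

t, x1, x2, u1, u2 = sp.symbols('t x1 x2 u1 u2')
coords = [t, x1, x2, u1, u2]
x, u, n = [x1, x2], [u1, u2], 2
F = [-x2*u1, u1**2]  # an arbitrary illustrative SODE (not from the paper)

def vf(comps):
    # a vector field on E as a derivation; comps are w.r.t. (t, x^a, u^a)
    return lambda f: sum(c*sp.diff(f, z) for c, z in zip(comps, coords))

def bracket(X, Y):
    # Lie bracket of two vector fields, again as a derivation
    return lambda f: sp.expand(X(Y(f)) - Y(X(f)))

# connection coefficients Gamma^a_b := -(1/2) dF^a/du^b
Gam = [[-sp.Rational(1, 2)*sp.diff(F[a], u[b]) for b in range(n)] for a in range(n)]

Gamma = vf([1, u1, u2, F[0], F[1]])                   # the SODE field (3)
V = [vf([0, 0, 0, 1, 0]), vf([0, 0, 0, 0, 1])]        # vertical basis V_a
H = [vf([0] + [1 if c == a else 0 for c in range(n)]
        + [-Gam[b][a] for b in range(n)]) for a in range(n)]   # horizontal basis H_a

# check [Gamma, V_a] = -H_a + Gam^b_a V_b on every coordinate function
for a in range(n):
    for z in coords:
        lhs = bracket(Gamma, V[a])(z)
        rhs = -H[a](z) + sum(Gam[b][a]*V[b](z) for b in range(n))
        assert sp.simplify(lhs - rhs) == 0
print("bracket relation verified")
```

The same device checks the remaining brackets, e.g. $[H_a, H_b] = R^d_{ab}V_d$, once the curvature components are coded.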
Helmholtz conditions and the EDS process
For a given regular Lagrangian $L$ there is a unique vector field $\Gamma$ on $E$, called the Euler-Lagrange field, such that
$$\Gamma\lrcorner\,d\theta_L = 0 \quad \text{and} \quad dt(\Gamma) = 1,$$
where $\theta_L$ is the Poincaré-Cartan 1-form,
$$\theta_L := L\,dt + dL \circ S = L\,dt + \frac{\partial L}{\partial u^a}\theta^a.$$
This vector field is a SODE, and the equations satisfied by its integral curves are the Euler-Lagrange equations for $L$. By careful observation of the properties of the Cartan 2-form $d\theta_L$, the following theorem from [6] gives a transparent geometric version of the Helmholtz conditions.

Theorem 3.1. [6] Given a SODE $\Gamma$, the necessary and sufficient conditions for there to be a Lagrangian whose Euler-Lagrange field is $\Gamma$ are that there should exist a 2-form $\Omega$ satisfying
$$\Omega \text{ is of maximal rank, i.e. } \textstyle\bigwedge^n\Omega \neq 0, \qquad (6)$$
$$\Omega(V_1, V_2) = 0 \quad \forall\, V_1, V_2 \in V(E), \qquad (7)$$
$$\Gamma\lrcorner\,\Omega = 0, \qquad (8)$$
$$d\Omega = 0. \qquad (9)$$
We will briefly show how the Helmholtz conditions in (1) arise from this theorem. In Crampin, Prince and Thompson [6] it is shown, using $\mathcal{L}_\Gamma\Omega = 0$ (which follows from (8) and (9)), that
$$\Omega = g_{ab}\,\psi^a \wedge \theta^b, \qquad |g_{ab}| \neq 0. \qquad (10)$$
The condition (9) gives
$$d\Omega(X, Y, Z) = 0 \quad \text{for all}$$
$$X, Y, Z \in \{\Gamma, V_a, H_b\}. \qquad (11)$$
Applying the formula for the exterior derivative of a 2-form $\Omega$,
$$d\Omega(X, Y, Z) = \sum_{\text{cyclic } X,Y,Z}\big(X(\Omega(Y, Z)) - \Omega([X, Y], Z)\big), \qquad (12)$$
to (11), we find that $d\Omega(V_a, V_b, V_c) = 0$ is trivial and
$$d\Omega(\Gamma, V_a, V_b) = 0 \;\Leftrightarrow\; g_{ab} = g_{ba},$$
$$d\Omega(\Gamma, H_a, H_b) = 0 \;\Leftrightarrow\; g_{ac}\Phi^c_b = g_{bc}\Phi^c_a, \qquad (13)$$
$$d\Omega(\Gamma, V_a, H_b) = 0 \;\Leftrightarrow\; \Gamma(g_{ab}) - g_{ac}\Gamma^c_b - g_{bc}\Gamma^c_a = 0,$$
$$d\Omega(V_a, V_b, H_c) = 0 \;\Leftrightarrow\; \frac{\partial g_{ab}}{\partial \dot{x}^c} = \frac{\partial g_{ac}}{\partial \dot{x}^b}.$$
Therefore $d\Omega = 0$ is equivalent to the Helmholtz conditions in (1), since the two remaining conditions in (11) are consequences of the four conditions in (13), as shown in this context by Aldridge [1].

In this paper we concentrate on the case where the matrix representation $\Phi = (\Phi^a_b)$ of $\Phi$ is diagonalisable. As in [7] and [21], this corresponds to Douglas's case I, case IIa or case III. Our choice of basis for $\mathfrak{X}(E)$ is $\{\Gamma, X^V_a, X^H_a\}$, where $X^V_a$ and $X^H_a$ are the vertical and horizontal lifts of eigenvectors $X_a$ of the diagonalisable $\Phi$ along the tangent bundle projection. Let $X_a = X^b_a\frac{\partial}{\partial x^b} \in \mathfrak{X}(\pi)$ be eigenvectors of $\Phi$ corresponding to eigenvalues $\lambda_a$; then their vertical and horizontal lifts are $X^V_a := X^b_aV_b$ and $X^H_a := X^b_aH_b$ respectively. The lifted eigenforms $\phi^a_V$ and $\phi^a_H$ of the eigenforms $\phi^a$, together with $dt$, form the dual basis $\{dt, \phi^a_V, \phi^a_H\}$ to the basis $\{\Gamma, X^V_a, X^H_a\}$. This means that we will look for a non-degenerate closed 2-form $\omega \in \Sigma := Sp\{\phi^a_V \wedge \phi^b_H\}$, that is, $\omega = r_{ab}\phi^a_V \wedge \phi^b_H$, instead of the one in (10).
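To make one step of this computation explicit (our own working, not spelled out in the original): the first condition in (13) follows by taking $(X, Y, Z) = (\Gamma, V_a, V_b)$ in (12). Since $\{dt, \psi^a, \theta^a\}$ is dual to $\{\Gamma, V_a, H_a\}$, the 2-form $\Omega = g_{ab}\psi^a \wedge \theta^b$ vanishes on the pairs $(\Gamma, V_a)$ and $(V_a, V_b)$, so only the bracket terms in (12) survive:

```latex
d\Omega(\Gamma, V_a, V_b)
  = -\Omega([\Gamma, V_a], V_b) + \Omega([\Gamma, V_b], V_a) - \Omega([V_a, V_b], \Gamma) \\
  = -\Omega(-H_a + \Gamma^c_a V_c,\, V_b) + \Omega(-H_b + \Gamma^c_b V_c,\, V_a)
  = \Omega(H_a, V_b) - \Omega(H_b, V_a) \\
  = -g_{ba} + g_{ab},
```

using $[V_a, V_b] = 0$, $\Omega(V_c, V_b) = 0$ and $\Omega(H_a, V_b) = -g_{ba}$. Hence $d\Omega(\Gamma, V_a, V_b) = 0$ is exactly the symmetry condition $g_{ab} = g_{ba}$; the other three conditions in (13) arise in the same way from the remaining triples.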
Once the $r_{ab}$'s are found, the multiplier $g_{ab}$ is given by
$$g_{ab} = r_{cd}\,\phi^c_a\phi^d_b, \qquad (14)$$
where $\phi^c_a$ and $\phi^d_b$ are the components of the eigenforms $\phi^c$ and $\phi^d$ respectively.

The commutator identities for the basis $\{\Gamma, X^V_a, X^H_a\}$ derived from (5a)-(5e) are as follows:
$$[\Gamma, X^V_b] = \tau^a_{\Gamma b}X^V_a - X^H_b, \qquad (15a)$$
$$[\Gamma, X^H_b] = \tau^a_{\Gamma b}X^H_a + \lambda_b\delta^a_bX^V_a = \tau^a_{\Gamma b}X^H_a + \lambda_bX^V_b, \qquad (15b)$$
$$[X^V_b, X^V_c] = (\tau^a_{Vbc} - \tau^a_{Vcb})X^V_a, \qquad (15c)$$
$$[X^V_b, X^H_c] = \tau^a_{Vbc}X^H_a - \tau^a_{Hcb}X^V_a, \qquad (15d)$$
$$[X^H_b, X^H_c] = (\tau^a_{Hbc} - \tau^a_{Hcb})X^H_a + \phi^a_V(R(X^H_b, X^H_c))X^V_a, \qquad (15e)$$
where the $\tau$'s are defined through
$$\hat\nabla_\Gamma X^V_b = \tau^a_{\Gamma b}X^V_a, \qquad \hat\nabla_\Gamma X^H_b = \tau^a_{\Gamma b}X^H_a, \qquad \hat\nabla_{X^V_b}X^V_c = \tau^a_{Vbc}X^V_a, \qquad \hat\nabla_{X^V_b}X^H_c = \tau^a_{Vbc}X^H_a, \qquad (16)$$
$$\hat\nabla_{X^H_b}X^V_c = \tau^a_{Hbc}X^V_a, \qquad \hat\nabla_{X^H_b}X^H_c = \tau^a_{Hbc}X^H_a.$$
We also have
$$A_\Gamma(X^V_b) = A_\Gamma(X^a_bV_a) = X^a_bH_a = X^H_b,$$
$$A_\Gamma(X^H_b) = A_\Gamma(X^a_bH_a) = -X^a_b\Phi^c_aV_c = -\lambda_bX^c_bV_c = -\lambda_bX^V_b.$$
We will now calculate the exterior derivatives of the eigenforms $\phi^a_V$ and $\phi^a_H$ using the identity $d\phi(X, Y) = X(\phi(Y)) - Y(\phi(X)) - \phi([X, Y])$ and the bracket identities (15a)-(15e):
$$d\phi^a_V(\Gamma, X^V_b) = -\phi^a_V(-X^H_b + \tau^d_{\Gamma b}X^V_d) = -\tau^a_{\Gamma b},$$
$$d\phi^a_V(\Gamma, X^H_b) = -\phi^a_V(\tau^d_{\Gamma b}X^H_d + \lambda_bX^V_b) = -\lambda_b\delta^a_b,$$
$$d\phi^a_V(X^V_b, X^V_c) = -\phi^a_V((\tau^d_{Vbc} - \tau^d_{Vcb})X^V_d) = \tau^a_{Vcb} - \tau^a_{Vbc},$$
$$d\phi^a_V(X^V_b, X^H_c) = -\phi^a_V(\tau^d_{Vbc}X^H_d - \tau^d_{Hcb}X^V_d) = \tau^a_{Hcb},$$
$$d\phi^a_V(X^H_b, X^H_c) = -\phi^a_V((\tau^d_{Hbc} - \tau^d_{Hcb})X^H_d + \phi^d_V(R(X^H_b, X^H_c))X^V_d) = -\phi^a_V(R(X^H_b, X^H_c)).$$
Similarly for $d\phi^a_H$:
$$d\phi^a_H(\Gamma, X^V_b) = \delta^a_b, \qquad d\phi^a_H(\Gamma, X^H_b) = -\tau^a_{\Gamma b},$$
$$d\phi^a_H(X^V_b, X^V_c) = 0, \qquad d\phi^a_H(X^V_b, X^H_c) = -\tau^a_{Vbc},$$
$$d\phi^a_H(X^H_b, X^H_c) = \tau^a_{Hcb} - \tau^a_{Hbc}.$$
Putting these components together we get:
$$d\phi^a_V = -\tau^a_{\Gamma b}\,dt \wedge \phi^b_V - \lambda_a\,dt \wedge \phi^a_H + \tau^a_{Hcb}\,\phi^b_V \wedge \phi^c_H + \tau^a_{Vcb}\,\phi^b_V \wedge \phi^c_V - \phi^a_V(R(X^H_b, X^H_c))\,\phi^b_H \wedge \phi^c_H, \qquad (17)$$
$$d\phi^a_H = dt \wedge \phi^a_V - \tau^a_{\Gamma b}\,dt \wedge \phi^b_H + \tau^a_{Hcb}\,\phi^b_H \wedge \phi^c_H - \tau^a_{Vbc}\,\phi^b_V \wedge \phi^c_H. \qquad (18)$$
We will use the following two equivalent Frobenius integrability conditions on a co-distribution $D^\perp_a = Sp\{\phi^a_V, \phi^a_H\}$ of (lifted) eigenforms of $\Phi$. We use the fact that $\omega^a := \phi^a_V \wedge \phi^a_H$ is a characterising form for $D^\perp_a$:
$$d\phi^a_V \equiv d\phi^a_H \equiv 0 \pmod{\phi^a_V, \phi^a_H}, \qquad (19)$$
equivalently,
$$d\omega^a = \xi^a_a \wedge \omega^a \ \text{(no sum on } a\text{)}, \ \text{for some one-form } \xi^a_a, \ \text{i.e.}\ d\omega^a \equiv 0 \pmod{\omega^a}. \qquad (20)$$
We note here that, for the one-form $\xi^a_a$ given in (20),
$$d\xi^a_a \equiv 0 \pmod{\phi^a_V, \phi^a_H}. \qquad (21)$$
By looking at the expressions for $d\phi^a_V$ in (17) and $d\phi^a_H$ in (18), together with (19), we get the following proposition.

Proposition 3.2.
The necessary and sufficient conditions for an eigen co-distribution $D^\perp_a = Sp\{\phi^a_V, \phi^a_H\}$ of $\Phi$ to be (Frobenius) integrable are:
$$\tau^a_{\Gamma b} = 0, \qquad \tau^a_{Vbc} = 0, \qquad \tau^a_{Hbc} = 0, \qquad \phi^a_V(R(X^H_b, X^H_c)) = 0 \qquad \text{for all } b, c \neq a.$$

Definition 3.3.
Let $D^\perp_a := Sp\{\phi^a_V, \phi^a_H\}$ be an eigen co-distribution of $\Phi$. A 1-form $\alpha \in D^\perp_a$ is said to be an integrable direction in $D^\perp_a$ if
$$d\alpha = \kappa \wedge \alpha, \qquad (22)$$
for some 1-form $\kappa$.

It can be seen from the expressions (17) and (18) for the exterior derivatives of $\phi^a_V$ and $\phi^a_H$ that the 1-form $\phi^a_V$ may be integrable but $\phi^a_H$ cannot be. Thus we can express an integrable direction $\alpha^a$ in $D^\perp_a$ as $\alpha^a = \phi^a_V + B^a\phi^a_H$.

Proposition 3.4.
The necessary and sufficient conditions for the existence of an integrable direction $\alpha^a = \phi^a_V + B^a\phi^a_H$, that is $d\alpha^a = \kappa^a \wedge \alpha^a$, in an eigen co-distribution $D^\perp_a = Sp\{\phi^a_V, \phi^a_H\}$ of $\Phi$ are, for $b, c \neq a$:
$$\tau^a_{\Gamma b} = 0, \qquad (23)$$
$$\tau^a_{Hcb} - B^a\tau^a_{Vbc} = 0, \qquad (24)$$
$$\tau^a_{Vcb} = \tau^a_{Vbc}, \qquad (25)$$
$$\phi^a_V(R(X^H_b, X^H_c)) + B^a(\tau^a_{Hcb} - \tau^a_{Hbc}) = 0, \qquad (26)$$
$$\Gamma(B^a) = (B^a)^2 + \lambda_a, \qquad (27)$$
$$X^V_b(B^a) = B^a\tau^a_{Vab} - \tau^a_{Hab}, \qquad (28)$$
$$X^H_b(B^a) = \phi^a_V(R(X^H_b, X^H_a)) + B^a(B^a\tau^a_{Vab} - \tau^a_{Hab}), \qquad (29)$$
and $\kappa^a$ is given by
$$\kappa^a = (\tau^a_{\Gamma a} + B^a)\,dt + (\tau^a_{Vab} - \tau^a_{Vba})\phi^b_V + (B^a\tau^a_{Vab} - \tau^a_{Hba})\phi^b_H + (B^a\tau^a_{Vaa} - X^V_a(B^a) - \tau^a_{Haa})\phi^a_H. \qquad (30)$$

Proof.
Let $\alpha^a = \phi^a_V + B^a\phi^a_H \in Sp\{\phi^a_V, \phi^a_H\}$ be an integrable direction. By Definition 3.3, $d\alpha^a = \kappa^a \wedge \alpha^a$. We apply the formula for the exterior derivative of a 1-form,
$$d\alpha^a(X, Y) = X(\alpha^a(Y)) - Y(\alpha^a(X)) - \alpha^a([X, Y]),$$
to the basis of vector fields $\{\Gamma, X^V_a, X^H_a - B^aX^V_a, X^V_b, X^H_b;\ \forall\, b \neq a\}$. $d\alpha^a(X, Y)$ is zero for basis pairs (
$X, Y$) when neither is $X^V_a$. This gives, for $b, c \neq a$,
$$0 = d\alpha^a(\Gamma, X^V_b) = -\tau^a_{\Gamma b}, \qquad d\alpha^a(\Gamma, X^H_b) = B^a\tau^a_{\Gamma b},$$
$$d\alpha^a(X^V_b, X^V_c) = \tau^a_{Vbc} - \tau^a_{Vcb}, \qquad d\alpha^a(X^H_b, X^V_c) = B^a\tau^a_{Vcb} - \tau^a_{Hbc},$$
$$d\alpha^a(X^H_b, X^H_c) = -\big(B^a(\tau^a_{Hbc} - \tau^a_{Hcb}) + \phi^a_V(R(X^H_b, X^H_c))\big),$$
$$d\alpha^a(X^V_b, X^H_a - B^aX^V_a) = X^V_b(B^a) - B^a\tau^a_{Vab} + \tau^a_{Hab},$$
$$d\alpha^a(X^H_b, X^H_a - B^aX^V_a) = X^H_b(B^a) - B^a(B^a\tau^a_{Vab} - \tau^a_{Hab}) - \phi^a_V(R(X^H_b, X^H_a)).$$
These immediately give the conditions (23)-(29). For the remainder we have:
$$d\alpha^a(\Gamma, X^V_a) = \tau^a_{\Gamma a} + B^a,$$
$$d\alpha^a(X^V_b, X^V_a) = \tau^a_{Vab} - \tau^a_{Vba},$$
$$d\alpha^a(X^H_b, X^V_a) = B^a\tau^a_{Vab} - \tau^a_{Hba},$$
$$d\alpha^a(X^H_a - B^aX^V_a, X^V_a) = -X^V_a(B^a) + B^a\tau^a_{Vaa} - \tau^a_{Haa}.$$
These give the formula for $\kappa^a$ in (30). Conversely, if conditions (23)-(29) hold, then $d\alpha^a(X, Y) = 0$ for all basis pairs (
$X, Y$) when neither is $X^V_a$, and thus $d\alpha^a = \kappa^a \wedge \alpha^a$ as expected. ∎

3.2 Jacobi identities in the inverse problem

The $\tau$'s defined in (16) are not independent, and we now examine their relations. To this end we apply the Jacobi identity
$$[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0$$
to the basis $\{\Gamma, X^V_a, X^H_b\}$. This results in the following identities, which are very useful for later calculations. (We remark in passing that the Bianchi identities for $\hat\nabla$ are redundant in the presence of the Jacobi identities (see [8]) and we do not consider them here.)

Applying this to the triple $\Gamma, X^V_b$ and $X^H_c$ we have
$$[\Gamma, [X^V_b, X^H_c]] + [X^V_b, [X^H_c, \Gamma]] + [X^H_c, [\Gamma, X^V_b]] = 0.$$
This gives the following identities:
$$\Gamma(\tau^a_{Vbc}) = -\tau^e_{Vbc}\tau^a_{\Gamma e} - \tau^a_{Hbc} + X^V_b(\tau^a_{\Gamma c}) + \tau^e_{\Gamma c}\tau^a_{Vbe} + \tau^a_{Vec}\tau^e_{\Gamma b}, \qquad (31a)$$
$$\Gamma(\tau^a_{Hcb}) = -\tau^e_{Hcb}\tau^a_{\Gamma e} + \tau^e_{\Gamma c}\tau^a_{Heb} - X^H_c(\tau^a_{\Gamma b}) + \tau^e_{\Gamma b}\tau^a_{Hce} - X^V_b(\lambda_c)\delta^a_c + \lambda_c(\tau^a_{Vcb} - \tau^a_{Vbc}) + \lambda_a\tau^a_{Vbc} + \phi^a_V(R(X^H_b, X^H_c)). \qquad (31b)$$
Applying the Jacobi identity to the triple $(\Gamma, X^V_b, X^V_c)$ gives no new identities, since the expression that we get,
$$\Gamma(\tau^a_{Vbc} - \tau^a_{Vcb}) = \tau^a_{\Gamma e}(\tau^e_{Vcb} - \tau^e_{Vbc}) + \tau^a_{Hcb} - \tau^a_{Hbc} + X^V_b(\tau^a_{\Gamma c}) - X^V_c(\tau^a_{\Gamma b}) + \tau^e_{\Gamma c}(\tau^a_{Vbe} - \tau^a_{Veb}) - \tau^e_{\Gamma b}(\tau^a_{Vce} - \tau^a_{Vec}),$$
can be obtained by using (31a).

Applying the Jacobi identity to other triples we get further identities. For $(\Gamma, X^H_b, X^H_c)$ we obtain
$$\Gamma(\phi^a_V(R(X^H_b, X^H_c))) = \tau^e_{\Gamma c}\phi^a_V(R(X^H_b, X^H_e)) + \tau^e_{\Gamma b}\phi^a_V(R(X^H_e, X^H_c)) - \tau^a_{\Gamma e}\phi^e_V(R(X^H_b, X^H_c)) + (\lambda_c - \lambda_a)\tau^a_{Hbc} - (\lambda_b - \lambda_a)\tau^a_{Hcb} + X^H_b(\lambda_c)\delta^a_c - X^H_c(\lambda_b)\delta^a_b, \qquad (32a)$$
$$\Gamma(\tau^a_{Hbc} - \tau^a_{Hcb}) = -(\tau^e_{Hbc} - \tau^e_{Hcb})\tau^a_{\Gamma e} + \tau^e_{\Gamma c}(\tau^a_{Hbe} - \tau^a_{Heb}) - \lambda_c\tau^a_{Vcb} + \lambda_b\tau^a_{Vbc} + X^H_c(\tau^a_{\Gamma b}) - X^H_b(\tau^a_{\Gamma c}) - \tau^e_{\Gamma b}(\tau^a_{Hce} - \tau^a_{Hec}) + \phi^a_V(R(X^H_b, X^H_c)). \qquad (32b)$$
Substituting $\Gamma(\tau^a_{Hbc})$ and $\Gamma(\tau^a_{Hcb})$ from (31b) into (32b) we have
$$\phi^a_V(R(X^H_b, X^H_c)) = X^V_b(\lambda_c)\delta^a_c - X^V_c(\lambda_b)\delta^a_b + \tau^a_{Vbc}(\lambda_c - \lambda_a) - \tau^a_{Vcb}(\lambda_b - \lambda_a). \qquad (33)$$
For the triple $(X^V_a, X^V_b, X^H_c)$ we have
$$X^V_a(\tau^d_{Vbc}) - X^V_b(\tau^d_{Vac}) = \tau^e_{Vac}\tau^d_{Vbe} - \tau^e_{Vbc}\tau^d_{Vae} + \tau^d_{Vec}(\tau^e_{Vab} - \tau^e_{Vba}), \qquad (34a)$$
$$X^V_a(\tau^d_{Hcb}) - X^V_b(\tau^d_{Hca}) = X^H_c(\tau^d_{Vab} - \tau^d_{Vba}) - \tau^e_{Vbc}\tau^d_{Hea} + \tau^e_{Vac}\tau^d_{Heb} + \tau^d_{Hce}(\tau^e_{Vab} - \tau^e_{Vba}) + \tau^e_{Hca}(\tau^d_{Vbe} - \tau^d_{Veb}) - \tau^e_{Hcb}(\tau^d_{Vae} - \tau^d_{Vea}). \qquad (34b)$$
For the triple $(X^H_a, X^V_b, X^H_c)$ we get
$$X^H_a(\tau^d_{Vbc}) - X^H_c(\tau^d_{Vba}) = X^V_b(\tau^d_{Hac} - \tau^d_{Hca}) - \tau^e_{Vbc}(\tau^d_{Hae} - \tau^d_{Hea}) - \tau^e_{Hcb}\tau^d_{Vea} - (\tau^e_{Hca} - \tau^e_{Hac})\tau^d_{Vbe} + \tau^e_{Vba}(\tau^d_{Hce} - \tau^d_{Hec}) + \tau^e_{Hab}\tau^d_{Vec}, \qquad (35a)$$
$$X^H_a(\tau^d_{Hcb}) - X^H_c(\tau^d_{Hab}) = X^V_b(\phi^d_V(R(X^H_c, X^H_a))) + \tau^e_{Vbc}\phi^d_V(R(X^H_a, X^H_e)) - \tau^e_{Hcb}\tau^d_{Hae} + \phi^e_V(R(X^H_c, X^H_a))(\tau^d_{Vbe} - \tau^d_{Veb}) + \tau^e_{Hab}\tau^d_{Hce} - \tau^d_{Heb}(\tau^e_{Hca} - \tau^e_{Hac}) - \tau^e_{Vba}\phi^d_V(R(X^H_c, X^H_e)). \qquad (35b)$$
For the triple $(X^V_a, X^V_b, X^V_c)$ we get only one new identity:
$$X^V_a(\tau^d_{Vbc} - \tau^d_{Vcb}) + X^V_b(\tau^d_{Vca} - \tau^d_{Vac}) + X^V_c(\tau^d_{Vab} - \tau^d_{Vba}) = -(\tau^d_{Vae} - \tau^d_{Vea})(\tau^e_{Vbc} - \tau^e_{Vcb}) - (\tau^d_{Vbe} - \tau^d_{Veb})(\tau^e_{Vca} - \tau^e_{Vac}) - (\tau^d_{Vce} - \tau^d_{Vec})(\tau^e_{Vab} - \tau^e_{Vba}). \qquad (36)$$
Finally, applying the Jacobi identity to $(X^H_a, X^H_b, X^H_c)$, we have two more identities:
$$X^H_a(\tau^d_{Hbc} - \tau^d_{Hcb}) + X^H_b(\tau^d_{Hca} - \tau^d_{Hac}) + X^H_c(\tau^d_{Hab} - \tau^d_{Hba}) = \tau^d_{Vea}\phi^e_V(R(X^H_b, X^H_c)) + \tau^d_{Veb}\phi^e_V(R(X^H_c, X^H_a)) + \tau^d_{Vec}\phi^e_V(R(X^H_a, X^H_b)) - (\tau^d_{Hae} - \tau^d_{Hea})(\tau^e_{Hbc} - \tau^e_{Hcb}) - (\tau^d_{Hbe} - \tau^d_{Heb})(\tau^e_{Hca} - \tau^e_{Hac}) - (\tau^d_{Hce} - \tau^d_{Hec})(\tau^e_{Hab} - \tau^e_{Hba}), \qquad (37a)$$
$$X^H_a(\phi^d_V(R(X^H_b, X^H_c))) + X^H_b(\phi^d_V(R(X^H_c, X^H_a))) + X^H_c(\phi^d_V(R(X^H_a, X^H_b))) = (\tau^e_{Hcb} - \tau^e_{Hbc})\phi^d_V(R(X^H_a, X^H_e)) + (\tau^e_{Hac} - \tau^e_{Hca})\phi^d_V(R(X^H_b, X^H_e)) + (\tau^e_{Hba} - \tau^e_{Hab})\phi^d_V(R(X^H_c, X^H_e)) - \tau^d_{Hae}\phi^e_V(R(X^H_b, X^H_c)) - \tau^d_{Hbe}\phi^e_V(R(X^H_c, X^H_a)) - \tau^d_{Hce}\phi^e_V(R(X^H_a, X^H_b)). \qquad (37b)$$

The idea of looking for closed 2-forms leads to the use of the EDS method for the inverse problem. The EDS references are the book [4] in general and the memoir [3] in particular for the inverse problem. In the first part of this section we give a brief description of the EDS approach to the inverse problem. The second part is devoted to significant results regarding the so-called differential ideal step of the EDS process.
According to Anderson and Thompson [3], the EDS process for the inverse problem involves three steps. We start with the submodule of 2-forms $\Sigma$. The first step, namely the differential ideal step, is to look for a final submodule $\Sigma_f$ of $\Sigma$ that generates a differential ideal. The second step is to create an equivalent linear Pfaffian system for the closed 2-forms, and the final step is to determine the generality of the solution of the problem by using the Cartan-Kähler theorem.

The differential ideal step is a recursive process which produces from $\Sigma_0 := \Sigma$ a sequence of submodules $\Sigma_0 \supset \Sigma_1 \supset \Sigma_2 \supset \dots$. Each stage of the process involves calculating the exterior derivatives of forms belonging to some submodule $\Sigma_i$ and then checking whether these 3-forms belong to the ideal generated by that submodule. This results in a restriction on the admissible 2-forms, which form a submodule $\Sigma_{i+1}$; the process is then repeated from this submodule, and so on, until a final differential ideal $\langle\Sigma_f\rangle$ is found, i.e. $\Sigma_f = \Sigma_{f+1}$, or the trivial set is reached. If it is impossible to create a maximal rank 2-form at any stage during this process, the problem has no regular solution.

Suppose that a differential ideal generated by $\Sigma_f$ is found; the next step in the EDS process is to express the problem of finding the closed 2-forms in $\Sigma_f$ as a Pfaffian system. We will give a brief outline of this step; see [3], [14] or [4] for details. Let the differential ideal $\langle\Sigma_f\rangle$ be generated by the set of 2-forms, not necessarily simple, $\{\bar\omega_k\}$, $k \in \{1, \dots, d\}$, and calculate
$$d\bar\omega_k = \bar\xi_{kh} \wedge \bar\omega_h,$$
where the $\bar\xi_{kh}$'s are now known one-forms. Since $\omega \in \Sigma_f = Sp\{\bar\omega_k\}$, $d\omega = \beta_j \wedge \bar\omega_j$, and because we are looking for those $\omega$'s such that $d\omega = 0$, the next step is to find all possible $d$-tuples of one-forms $(\rho^A_k) = (\rho^A_1, \dots, \rho^A_d)$ such that $\rho^A_k \wedge \bar\omega_k = 0$.
Once all the $d$-tuples $\rho^A_k$, $A \in \{1, \dots, e\}$, have been found, the inverse problem becomes that of finding the functions $r_k$ which satisfy the Pfaffian system of equations
$$dr_k + r_h\bar\xi_{hk} + p_A\rho^A_k = 0, \qquad (38)$$
for some arbitrary functions $p_A$. The freedom in the choice of these $p_A$'s will then be exploited in the final part of the EDS procedure.

The general method for finding the solution of this problem in EDS is to define an extended manifold $N = E \times \mathbb{R}^d \times \mathbb{R}^e$ with coordinates $\{t, x^a, u^b, r_k, p_A\}$, $a, b \in \{1, \dots, n\}$, $k \in \{1, \dots, d\}$, $A \in \{1, \dots, e\}$, and to look for $(2n+1)$-dimensional submanifolds that are sections over $E$ and on which the one-forms $\sigma_k := dr_k + r_h\bar\xi_{hk} + p_A\rho^A_k$ are zero.

To find these manifolds, the $\sigma_k$ are considered constraint forms for some distribution on $N$, and the problem becomes that of looking for integral manifolds arising from this distribution. To find these integral manifolds, we choose a basis of forms on $N$, $\{\alpha^m, \sigma_k, \pi^A\}$, where $\{\alpha^m\}$ is a pulled-back basis for $E$, $\pi^A := dp^A$, and the $\sigma_k$ as defined above complete the basis. The condition that we have sections over $E$ is that the form $\alpha^1 \wedge \dots \wedge \alpha^{2n+1}$ be non-zero on the $(2n+1)$-dimensional integral manifolds given by the constraint forms.

In the remainder of this section we give a brief outline of the process of finding the generality of the solutions of this last problem; see [3] or [4] for details. According to [3], to determine the existence and generality of the solutions of (38), we calculate the exterior derivatives $d\sigma_k$ modulo the ideal generated by the forms $\sigma_k$:
$$d\sigma_k \equiv \pi_{ik} \wedge \alpha^i + t_{ijk}\,\alpha^i \wedge \alpha^j \pmod{\sigma}, \qquad (39)$$
where the $\pi_{ik}$ are some linear combinations of the $\pi^A$. As $d\sigma_k$ expands with no $dp^A \wedge dp^B$ terms, the system is quasi-linear. As we want the system to be a section over $E$, i.e. $\alpha^1 \wedge \dots \wedge \alpha^{2n+1} \neq 0$ on the integral manifolds, we need to absorb all the $\alpha^i \wedge \alpha^j$ terms into the $\pi_{ik} \wedge \alpha^i$ terms.
This is done by changing the basis forms $\pi^A$ to $\bar\pi^A := \pi^A - l^A_j\alpha^j$. If any of the $\alpha^i \wedge \alpha^j$ terms cannot be absorbed, then asking for $d\sigma_k \equiv 0 \pmod{\sigma}$ is incompatible with the independence condition, and therefore there is no solution. Once the $\alpha^i \wedge \alpha^j$ terms have been removed, the system
$$d\sigma_k \equiv \pi_{ik} \wedge \alpha^i \pmod{\sigma} \qquad (40)$$
is used to create the tableau $\Pi$, from which the Cartan characters $s_1, s_2, \dots, s_k$ can be calculated, allowing us to apply the Cartan test for involution:
$$\Pi = \begin{array}{c|cccc} & \alpha^1 & \alpha^2 & \dots & \alpha^n \\ \hline \sigma_1 & \pi_{11} & \pi_{21} & \dots & \pi_{n1} \\ \sigma_2 & \pi_{12} & \pi_{22} & \dots & \pi_{n2} \\ \vdots & \vdots & \vdots & & \vdots \\ \sigma_d & \pi_{1d} & \pi_{2d} & \dots & \pi_{nd} \end{array}$$
The first character $s_1$ is the number of independent one-forms that can be chosen from the first column of $\Pi$; $s_2$ is the number of independent forms in the second column that are also independent of all forms in the first column. This is repeated for $s_3$ and onwards until all the independent forms are exhausted. In computing the Cartan characters, the basis $\{\alpha^i\}$ is chosen so that $s_1$ is as large as possible, $s_2$ as large as possible but no larger than $s_1$, and so on. In particular, the $s_k$ must form a non-increasing sequence of integers.

Once the Cartan characters are found, the Cartan test for involution is performed. Let $t$ denote the number of ways in which the forms $\pi_{ik}$ can be modified by using $\bar\pi^A = \pi^A - l^A_j\alpha^j$ without changing (40). That is, $t$ is the dimension of the linear space of $e$-tuples of one-forms $(\tau^1, \tau^2, \dots, \tau^e)$ of the form $\tau^A = l^A_j\alpha^j$ such that $a^i_{Ak}\tau^A \wedge \alpha^i = 0$. Then, according to Cartan, the differential system (38) is in involution if and only if
$$t = s_1 + 2s_2 + 3s_3 + \dots + ks_k.$$
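A toy count may help fix the procedure (the numbers are ours, purely for illustration, not an example from the paper). Suppose the tableau $\Pi$ yields characters and absorption freedom

```latex
% Hypothetical numbers for illustration only.
s_1 = 2, \qquad s_2 = 1, \qquad s_3 = \dots = 0, \qquad t = 4.
% Cartan's test:
s_1 + 2 s_2 + 3 s_3 + \dots \;=\; 2 + 2\cdot 1 \;=\; 4 \;=\; t.
```

Then the system is in involution, and since $s_2$ is the last non-zero character the general solution depends on one arbitrary function of two variables. If the test fails, the system must be prolonged.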
If the Cartan test fails, then it is necessary to prolong the differential system by differentiating the original equations to obtain a new differential system on $N^{(1)}$, the first jet bundle of local sections of $N$ over $M$, and then repeating the foregoing analysis. Once a sequence of Cartan characters is found that passes the Cartan test, then if $s_l$ is the last non-zero character, the general solution of the differential system (38) will depend on $s_l$ arbitrary functions of $l$ variables.

3.3.2 The differential ideal step

It follows from Theorem 3.1 that, for a given SODE $\Gamma$ for which the corresponding $\Phi$ is diagonalisable, the closed 2-form $\omega$ that we are seeking must satisfy the algebraic conditions:
$$\omega(X^V_a, X^V_b) = 0, \qquad \omega(X^H_a, X^H_b) = 0, \qquad \Gamma\lrcorner\,\omega = 0, \qquad \omega(X^V_a, X^H_b) = \omega(X^V_b, X^H_a).$$
So we start the EDS process with the module $\Sigma_0 := Sp\{\omega^{ab}\}$, where
$$\omega^{ab} := \tfrac{1}{2}(\phi^a_V \wedge \phi^b_H + \phi^b_V \wedge \phi^a_H), \qquad 1 \leq a \leq b \leq n,$$
and look for the (final) differential ideal generated by $\Sigma_f$. Consider a 2-form $\omega$ in $\Sigma_0$, that is, $\omega = \sum_{a \leq b} r_{ab}\omega^{ab}$. Calculating the exterior derivative of $\omega$ using (17) and (18) gives
$$d\omega = \sum_{a \leq b} dr_{ab} \wedge \tfrac{1}{2}(\phi^a_V \wedge \phi^b_H + \phi^b_V \wedge \phi^a_H) - \sum_{a \leq b} r_{ab}\Big[(\tau^a_{\Gamma c}\,dt + \tau^a_{Hdc}\phi^d_H + \tau^a_{Vdc}\phi^d_V) \wedge \tfrac{1}{2}(\phi^b_V \wedge \phi^c_H + \phi^c_V \wedge \phi^b_H) - (\tau^b_{\Gamma c}\,dt + \tau^b_{Hdc}\phi^d_H + \tau^b_{Vdc}\phi^d_V) \wedge \tfrac{1}{2}(\phi^a_V \wedge \phi^d_H + \phi^d_V \wedge \phi^a_H) + \tfrac{1}{2}(\lambda_b - \lambda_a)\,dt \wedge \phi^a_H \wedge \phi^b_H - \phi^a_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^b_H - \phi^b_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^a_H\Big]$$
$$\Rightarrow\quad d\omega \equiv \sum_{a \leq b} r_{ab}\Big[\tfrac{1}{2}(\lambda_b - \lambda_a)\,dt \wedge \phi^a_H \wedge \phi^b_H - \phi^a_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^b_H - \phi^b_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^a_H\Big] \pmod{\langle\Sigma_0\rangle}. \qquad (41)$$
By looking at (41) we find alternative proofs of the following two propositions (see [2]).
Proposition 3.5.
The differential ideal step finishes at $\Sigma_0$ if and only if $\Phi$ is a function multiple of the identity.

Proof. From (41), the necessary and sufficient conditions for $\langle\Sigma_0\rangle$ to be a differential ideal are
$$r_{ab}(\lambda_b - \lambda_a)\,dt \wedge \phi^a_H \wedge \phi^b_H = 0, \qquad \forall\, a < b \ \text{(no sum)}, \qquad (42)$$
and
$$\sum_{a \leq b} r_{ab}\big(\phi^a_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^b_H + \phi^b_V(R(X^H_d, X^H_c))\,\phi^d_H \wedge \phi^c_H \wedge \phi^a_H\big) = 0 \qquad (43)$$
for all $r_{ab}$. Hence, if $\Sigma_0$ generates a differential ideal, then the two conditions are satisfied for all $r_{ab}$. This implies that $\lambda_a = \lambda_b$ for all $a, b$, and thus the matrix $\Phi$ is a multiple of the identity. Conversely, if $\Phi$ is a multiple of the identity, then (42) is immediately satisfied and (43) is satisfied via the identity (34). ∎

Proposition 3.6. Suppose that $\Phi$ is diagonalisable with distinct eigenvalues and eigenforms $\phi^a$. Let $\Sigma_0$ be $Sp\{\omega^{ab}\}$ and $\omega \in \Sigma_0$. Then $\omega \in \Sigma_1$ if and only if $\omega := \sum^n_{a=1} r_a\omega^{aa}$ and the curvature satisfies
$$\sum_{\text{cyclic } dce} r_d\,\phi^d_V(R(X^H_c, X^H_e)) = 0, \qquad \text{for all distinct } d, c, e \ \text{(no sum on } d\text{)}. \qquad (44)$$

Proof.
Let ω ∈ Σ^1, that is, ω = Σ_{a≤b} r_{ab} ω^{ab}; then ω ∈ Σ^2 if and only if dω ∈ ⟨Σ^1⟩. This is equivalent to the two conditions (42) and (43) on the r_{ab}. Since λ_b − λ_a ≠ 0 by assumption, (42) gives r_{ab} = 0 for a < b. We will now write r_a instead of r_{aa} and ω^a instead of ω^{aa}. The condition (43) becomes

Σ_{dce} r_d φ^d_V(R(X^H_c, X^H_e)) φ^c_H ∧ φ^e_H ∧ φ^d_H = 0, for distinct d, c and e,

which is equivalent to (44). Therefore Σ^2 consists of those ω in Sp{ω^a := φ^a_V ∧ φ^a_H : a = 1, ..., n} satisfying the condition (44), as required.

It follows from Proposition 3.5 that, in the case where Φ is diagonalisable with distinct eigenvalues, we need to proceed to the next differential ideal step to examine whether or not ⟨Σ^2⟩ is a differential ideal. However, the condition (44) presents a difficulty in this checking process. We therefore denote by ˜Σ^1 := Sp{ω^a : a = 1, ..., n} the module whose elements do not necessarily satisfy (44), so that Σ^2 ⊆ ˜Σ^1 ⊂ Σ^1. These results show that, in the case where Φ is diagonalisable with distinct eigenvalues, ˜Σ^1 is the more effective option with which to start the differential ideal step.

Proposition 3.7.
Let Φ be diagonalisable with distinct eigenvalues. Then the necessary and sufficient conditions for ω = Σ_a r_a φ^a_V ∧ φ^a_H ∈ ˜Σ^1 to have its exterior derivative in the ideal ⟨˜Σ^1⟩ are that, for all distinct a, b and c (no sum),

r_a τ^a_{Γb} + r_b τ^b_{Γa} = 0,
r_a(τ^a_{Vbc} − τ^a_{Vcb}) − r_b τ^b_{Vca} + r_c τ^c_{Vba} = 0,
r_a(τ^a_{Hbc} − τ^a_{Hcb}) − r_b τ^b_{Hca} + r_c τ^c_{Hba} = 0,   (45)
r_a φ^a_V(R(X^H_c, X^H_b)) + r_b φ^b_V(R(X^H_a, X^H_c)) + r_c φ^c_V(R(X^H_b, X^H_a)) = 0.

The last of these is just (44).

Proof.
Let ω = Σ_a r_a φ^a_V ∧ φ^a_H ∈ ˜Σ^1. Then

dω ∈ ⟨˜Σ^1⟩ ⇔ dω = Σ_k β_k ∧ φ^k_V ∧ φ^k_H.   (46)

By observation, (46) is equivalent to

dω(Γ, X^V_a, X^V_b) = 0, dω(Γ, X^H_a, X^H_b) = 0, dω(Γ, X^V_a, X^H_b) = 0,
dω(X^V_a, X^H_b, X^H_c) = 0, dω(X^V_a, X^V_b, X^H_c) = 0, dω(X^V_a, X^V_b, X^V_c) = 0,   (47)
dω(X^H_a, X^H_b, X^H_c) = 0,

for all distinct a, b and c. Applying the formula (12) to the identities in (47), we see that only the second part, which involves the Lie brackets, can contribute. Using the bracket relations (15a)-(15e) in the calculation we find that the first, the second and the sixth conditions in (47) are identically satisfied. The third condition gives:

r_a τ^a_{Γb} + r_b τ^b_{Γa} = 0, a ≠ b.

The fourth and the fifth conditions respectively give

r_a(τ^a_{Hbc} − τ^a_{Hcb}) − r_b τ^b_{Hca} + r_c τ^c_{Hba} = 0,
r_a(τ^a_{Vbc} − τ^a_{Vcb}) − r_b τ^b_{Vca} + r_c τ^c_{Vba} = 0,

for all distinct a, b and c. The remaining condition is that, for all distinct a, b, c and with no sum,

r_a φ^a_V(R(X^H_c, X^H_b)) + r_b φ^b_V(R(X^H_a, X^H_c)) + r_c φ^c_V(R(X^H_b, X^H_a)) = 0.

This last condition is simply the condition (44) in Proposition 3.6.
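Since the conditions (45) are linear in the r_a, determining which combinations ω = Σ_a r_a ω^a survive a given differential ideal step reduces to linear algebra once the structure functions are known. A minimal sketch of the first family of conditions in (45), with purely hypothetical values for the τ^a_{Γb} (the array `tau_G` and its entries are illustrative, not taken from any particular SODE):

```python
import sympy as sp

n = 3
r = sp.symbols('r1:4')  # the coefficients r_a of omega = sum_a r_a omega^a

# hypothetical structure functions tau^a_{Gamma b} for a toy SODE
# (illustrative values only; in practice they come from covariant
#  differentiation of the scaled eigenvectors of Phi)
tau_G = sp.Matrix([[0, 1, 0],
                   [0, 0, 0],
                   [0, 0, 0]])

# first family of conditions in (45): r_a tau^a_{Gamma b} + r_b tau^b_{Gamma a} = 0, a != b
eqs = [r[a]*tau_G[a, b] + r[b]*tau_G[b, a]
       for a in range(n) for b in range(n) if a != b]

sol = sp.linsolve(eqs, r)
print(sol)  # {(0, r2, r3)}: r1 is forced to vanish, r2 and r3 remain free
```

Here the single non-zero τ kills r_1, shrinking the surviving submodule by one dimension; demanding that the conditions hold for arbitrary r_a instead yields the vanishing conditions of the corollary below.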
Corollary 3.8.
For diagonalisable Φ with distinct eigenvalues, the necessary and sufficient conditions for ˜Σ^1 to generate a differential ideal are that, for all distinct a, b and c,

τ^a_{Γb} = 0, τ^a_{Vbc} = 0.   (48)

Proof. ˜Σ^1 := Sp{φ^a_V ∧ φ^a_H : a = 1, ..., n} generates a differential ideal if and only if the conditions in Proposition 3.7 hold for arbitrary r_a. This immediately gives τ^a_{Γb} = 0, τ^a_{Vbc} = 0, τ^a_{Hbc} = 0 and φ^a_V(R(X^H_b, X^H_c)) = 0 for all distinct a, b and c. But the last two conditions are consequences of the first two, by the Jacobi identities (31a) and (31b) for distinct a, b and c.

We note here that if we assume τ^a_{Γb} = 0 for all a ≠ b, so that ∇̂_Γ X^{V/H}_a = τ^a_{Γa} X^{V/H}_a, then the remaining τ^a_{Γa} can be put equal to zero by re-scaling the eigenvectors of Φ. Thus from now on we have that if ˜Σ^1 is a differential ideal, then τ^a_{Γb} = 0 for all a, b.

In the subsequent differential ideal steps, we define ˜Σ^{i+1} := {ω ∈ ˜Σ^i : dω ∈ ⟨˜Σ^i⟩}. Thus ˜Σ^2 is the submodule of 2-forms in ˜Σ^1 which further satisfy the conditions in (45), and so ˜Σ^2 ⊆ Σ^2 ⊆ ˜Σ^1. The relation between the sequences ˜Σ^1 ⊃ ˜Σ^2 ⊃ ··· ⊃ ˜Σ^p ⊃ ··· and Σ^1 ⊃ Σ^2 ⊃ ··· ⊃ Σ^p ⊃ ··· is as follows.

Lemma 3.9. ˜Σ^1 ⊇ Σ^2 ⊇ ˜Σ^2 ⊇ Σ^3 ⊇ ··· ⊇ ˜Σ^p ⊇ Σ^{p+1} ⊇ ···.

Proof.
We have established this for p = 1. Suppose that it is true for p = k; we now prove that it is true for p = k + 1, that is, ˜Σ^{k+1} ⊇ Σ^{k+2} ⊇ ˜Σ^{k+2}. Let ω ∈ Σ^{k+2}; then ω ∈ Σ^{k+1} ⊆ ˜Σ^k and dω ∈ ⟨Σ^{k+1}⟩ ⊆ ⟨˜Σ^k⟩, and so ω ∈ ˜Σ^{k+1}. Now let ω ∈ ˜Σ^{k+2}; then ω ∈ ˜Σ^{k+1} ⊆ Σ^{k+1} and dω ∈ ⟨˜Σ^{k+1}⟩ ⊆ ⟨Σ^{k+1}⟩, and so ω ∈ Σ^{k+2}.

An immediate result from Lemma 3.9 is

Corollary 3.10. If ˜Σ^p generates a differential ideal but ˜Σ^{p−1} does not, then either Σ^p or Σ^{p+1} generates a differential ideal, and moreover either ˜Σ^p = Σ^p or ˜Σ^p = Σ^{p+1} respectively.

Proof. As a result of Proposition 3.7 it is true for p = 1: if ⟨˜Σ^1⟩ is a differential ideal then ˜Σ^1 = Σ^2. For p > 1, since ˜Σ^{p−1} does not, by assumption, generate a differential ideal, there are two cases for Σ^p. In the first case, if Σ^p generates a differential ideal then Σ^p = Σ^{p+1}, and so Σ^p = ˜Σ^p = Σ^{p+1} by Lemma 3.9. In the second case, if Σ^p does not generate a differential ideal, then ⟨Σ^{p+1}⟩ is a differential ideal, because ⟨˜Σ^p⟩ is a differential ideal and ˜Σ^p = Σ^{p+1} = ˜Σ^{p+1}.

The following proposition gives a sufficient condition for the non-existence of regular solutions. It can be used to exclude cases where there are no regular solutions.

Proposition 3.11.
Suppose Φ is diagonalisable with distinct eigenvalues. If some ω^c is missing from the final submodule Σ^f, that is, ω(X^V_c, X^H_c) = 0 for all ω ∈ Σ^f, then there is no regular solution to the inverse problem.

Proof. Let ω ∈ Σ^f. We have Γ⌟ω = 0. If ω^c is missing from Σ^f, then X^V_c⌟ω = 0 and X^H_c⌟ω = 0. It then follows that ω has a kernel of dimension greater than one.

Theorem 3.12.
Let Φ be diagonalisable with distinct eigenvalues. Suppose there are q non-integrable eigen co-distributions. If the sequence ⟨˜Σ^1⟩, ..., ⟨˜Σ^q⟩ does not contain a differential ideal, then there is no non-degenerate solution.

Proof. Suppose that the eigen co-distributions are ordered so that the first q are non-integrable. Firstly, if ⟨˜Σ^q⟩ is not a differential ideal, then no earlier ⟨˜Σ^p⟩ can be a differential ideal. Now each of the n − q integrable ω^b := φ^b_V ∧ φ^b_H has remained in ˜Σ^q, since dω^b = ξ^b_b ∧ ω^b. However, ⟨˜Σ^q⟩ is not a differential ideal, so that dim(˜Σ^q) > n − q. Now dim(˜Σ^{p+1}) < dim(˜Σ^p) for p < q + 1, and so dim(˜Σ^q) ≤ n − (q − 1) = n − q + 1, giving dim(˜Σ^q) = n − q + 1. But ⟨˜Σ^q⟩ is not a differential ideal by assumption, and hence dim(˜Σ^{q+1}) = n − q, so that ω^1, ..., ω^q are missing and no solution exists.

We remark that it is not the case that the number of non-integrable co-distributions always matches the terminating differential ideal step. Example 5 in section 5 demonstrates this for n = 3.

Corollary 3.13.
Let Φ be diagonalisable with distinct eigenvalues. If the final submodule is one-dimensional and ˜Σ^f = Sp{r_a ω^a} (sum on a, a = 1, ..., n) with all r_a ≠ 0, then no eigen co-distribution of Φ is integrable.

For the sake of completeness we reproduce the following theorem about the limiting cases.
Theorem 3.14. (see [18]) Suppose that the final differential ideal is generated by a one-dimensional submodule Σ^f = Sp{˜ω}, for non-degenerate ˜ω; that is, there exists µ such that

d˜ω = µ ∧ ˜ω,  ∧^n ˜ω ≠ 0.

Then ⟨˜ω⟩ contains a closed, non-degenerate two-form if and only if dµ = 0.

We characterise non-integrable eigen co-distributions in the next theorem.

Theorem 3.15.
Let Φ be diagonalisable with distinct eigenvalues and suppose ˜Σ^1 = Sp{φ^b_V ∧ φ^b_H, b = 1, ..., n} generates a differential ideal. Suppose further that Sp{φ^a_V, φ^a_H} is a non-integrable eigen co-distribution of Φ for some a. Then

1. there exists at least one non-zero τ^a_{Vbb} for some b ≠ a;

2. let τ^a_{Vbb}, τ^a_{Hbb} ≠ 0 for some b ≠ a; then ᾱ^a = φ^a_V + B̄^a φ^a_H (no sum) is an integrable direction in Sp{φ^a_V, φ^a_H} if and only if

B̄^a = τ^a_{Hbb}/τ^a_{Vbb} for all such b,   (49)

and

X^V_b(B̄^a) = B̄^a τ^a_{Vab} − τ^a_{Hab}, a ≠ b;   (50)
3. let τ^a_{Hbb} = 0 for all b ≠ a; then φ^a_V is an integrable direction if and only if τ^a_{Hab} = 0 for all b ≠ a.

Proof.
1. Since the co-distribution Sp{φ^a_V, φ^a_H} is non-integrable and ˜Σ^1 is a differential ideal, there is at least one τ^a_{Vbb} ≠ 0, by Proposition 3.2.

2. Suppose that ᾱ^a = φ^a_V + B̄^a φ^a_H is an integrable direction in Sp{φ^a_V, φ^a_H}, that is, (23)-(29) hold for B^a = B̄^a. Then from (24) with c = b we get B̄^a = τ^a_{Hbb}/τ^a_{Vbb}, and (50) is exactly (28).

Conversely, we show that if B̄^a = τ^a_{Hbb}/τ^a_{Vbb} and X^V_b(B̄^a) = B̄^a τ^a_{Vab} − τ^a_{Hab}, then (23)-(29) hold for B^a = B̄^a, as follows. The conditions (23), (24), (25) and (26) hold for B^a = B̄^a by Corollary 3.8. To establish (27) we note that τ^a_{Hbc} = −Γ(τ^a_{Vbc}) for all b, c ≠ a, from the Jacobi identity (31a) and τ^a_{Γb} = 0 for all a, b; and Γ(τ^a_{Hbb}) = λ_a τ^a_{Vbb}, from the Jacobi identity (31b) together with the conditions in Corollary 3.8. We have

Γ(B̄^a) = Γ(τ^a_{Hbb}/τ^a_{Vbb}) (for τ^a_{Vbb} ≠ 0) = (Γ(τ^a_{Hbb}) − B̄^a Γ(τ^a_{Vbb}))/τ^a_{Vbb} = λ_a + (B̄^a)²,

so the condition (27) holds. The condition (50) is exactly (28), and it implies the condition (29) for B^a = B̄^a, namely

X^H_b(B̄^a) = φ^a_V(R(X^H_b, X^H_a)) + B̄^a(B̄^a τ^a_{Vab} − τ^a_{Hab}),

as follows:

X^H_b(B̄^a) = −[Γ, X^V_b](B̄^a) = X^V_b(Γ(B̄^a)) − Γ(X^V_b(B̄^a))
 = X^V_b((B̄^a)² + λ_a) − Γ(B̄^a τ^a_{Vab} − τ^a_{Hab})
 = 2B̄^a(B̄^a τ^a_{Vab} − τ^a_{Hab}) + X^V_b(λ_a) − ((B̄^a)² + λ_a) τ^a_{Vab} + B̄^a τ^a_{Hab} + Γ(τ^a_{Hab}).

By identity (31b) and Corollary 3.8,

Γ(τ^a_{Hab}) = φ^a_V(R(X^H_b, X^H_a)) + λ_a τ^a_{Vab} − X^V_b(λ_a).

Substituting this into X^H_b(B̄^a) above and then simplifying, we get

X^H_b(B̄^a) = φ^a_V(R(X^H_b, X^H_a)) + B̄^a(B̄^a τ^a_{Vab} − τ^a_{Hab})

as required.

3. If φ^a_V is integrable then dφ^a_V = µ^a ∧ φ^a_V.
By looking at (17), along with the assumption that ⟨˜Σ^1⟩ is a differential ideal, we then have τ^a_{Hab} = 0 for b ≠ a.

Conversely, using identity (31b) and Corollary 3.8 we find that φ^a_V(R(X^H_b, X^H_a)) = 0, and so φ^a_V(R(X^H_a, X^H_b)) = 0 because R is skew; and all other terms apart from the form µ ∧ φ^a_V on the right hand side of (17) vanish because of Corollary 3.8 and the assumptions of the theorem. Hence τ^a_{Hab} = 0, together with the stated assumptions and (17), implies that φ^a_V is an integrable direction.

We now examine the consequences of more than one non-zero τ^a_{Vbb}.

Corollary 3.16.
Let Φ be diagonalisable with distinct eigenvalues and suppose ˜Σ^1 = Sp{φ^b_V ∧ φ^b_H, b = 1, ..., n} generates a differential ideal. Suppose that for some a, Sp{φ^a_V, φ^a_H} is a non-integrable co-distribution of Φ. Suppose further that there exist at least two τ^a_{Vb_ib_i} ≠ 0 with b_i ≠ a. Then ᾱ^a = φ^a_V + B̄^a φ^a_H is an integrable direction in Sp{φ^a_V, φ^a_H} if and only if

B̄^a = τ^a_{Hb_ib_i}/τ^a_{Vb_ib_i} for each such b_i ≠ a.

Proof.
If ᾱ^a = φ^a_V + B̄^a φ^a_H is an integrable direction, then B̄^a = τ^a_{Hb_ib_i}/τ^a_{Vb_ib_i} for all such b_i ≠ a, by Theorem 3.15 (along with (50)). Conversely, we will show that, with the given B̄^a,

X^V_{b_i}(B̄^a) = B̄^a τ^a_{Vab_i} − τ^a_{Hab_i}, b_i ≠ a.

We note that with the conditions in Corollary 3.8 we get

X^V_{b_i}(τ^a_{Vb_jb_j}) = τ^a_{Vb_jb_j}(2τ^{b_j}_{Vb_ib_j} − τ^{b_j}_{Vb_jb_i}) − τ^a_{Vb_jb_j} τ^a_{Vb_ia} for distinct a, b_i, b_j   (51)

from the Jacobi identity (34a), and

X^V_{b_i}(τ^a_{Hb_jb_j}) = τ^a_{Hb_jb_j}(2τ^{b_j}_{Vb_ib_j} − τ^{b_j}_{Vb_jb_i}) − τ^a_{Vb_jb_j} τ^a_{Hb_ia} − τ^a_{Hb_jb_j}(τ^a_{Vb_ia} − τ^a_{Vab_i}) for distinct a, b_i, b_j   (52)

from the Jacobi identity (34b). Now we have

X^V_{b_i}(B̄^a) = X^V_{b_i}(τ^a_{Hb_jb_j}/τ^a_{Vb_jb_j}) = (X^V_{b_i}(τ^a_{Hb_jb_j}) − B̄^a X^V_{b_i}(τ^a_{Vb_jb_j}))/τ^a_{Vb_jb_j}.

Substituting X^V_{b_i}(τ^a_{Vb_jb_j}) from (51) and X^V_{b_i}(τ^a_{Hb_jb_j}) from (52) into X^V_{b_i}(B̄^a) above and then simplifying, we get

X^V_{b_i}(B̄^a) = B̄^a τ^a_{Vab_i} − τ^a_{Hab_i}, b_i ≠ a,

as required.

4 Douglas's case IIa2 and an extension
In this section we shall explicitly deal with the inverse problem in arbitrary dimension n given by a diagonalisable Φ with distinct eigenvalues and exactly n − 1 integrable eigen co-distributions, extending Douglas's case IIa2 ([∇Φ, Φ] = 0) or case III ([∇Φ, Φ] ≠ 0) in the sense of Douglas's classification. In n = 2, [∇Φ, Φ] = 0 is equivalent to ˜Σ^1 being a differential ideal. In higher dimensions the problem is rather complicated. In the n = 3 case, for instance, [∇Φ, Φ] = 0 no longer means that ˜Σ^1 is a differential ideal; and the cases of one non-integrable co-distribution and two non-integrable co-distributions may both be considered for the extension of Douglas's case IIa2.

Consider a given system of second-order ordinary differential equations

ẍ^a = F^a(t, x^b, ẋ^b), a, b = 1, ..., n,

for which Φ is diagonalisable with distinct eigenvalues and exactly n − 1 integrable eigen co-distributions. Let us suppose that we have a differential ideal at the first step, i.e. ˜Σ^1 = Sp{φ^c_V ∧ φ^c_H : c = 1, ..., n} generates a differential ideal. Without loss of generality, let us also assume that the only non-integrable co-distribution is Sp{φ^b_V, φ^b_H} for some b ∈ {1, 2, ..., n}, and that the other n − 1 co-distributions, Sp{φ^a_V, φ^a_H : a ≠ b}, are integrable. Then we have:

dω^b = d(φ^b_V ∧ φ^b_H) = ξ^b_b ∧ ω^b + ξ^b_a ∧ ω^a, a ≠ b, with ξ^b_a ≠ 0 for some a ≠ b,
dω^a = d(φ^a_V ∧ φ^a_H) = ξ^a_a ∧ ω^a (no sum), a ≠ b,

where, with A^{aV,H}_{cd} := τ^a_{V,Hcd} − τ^a_{V,Hdc},

ξ^a_a = A^{aV}_{ac} φ^c_V + A^{aH}_{ac} φ^c_H, ξ^b_b = A^{bV}_{bc} φ^c_V + A^{bH}_{bc} φ^c_H, ξ^b_a = τ^b_{Vaa} φ^b_V + τ^b_{Haa} φ^b_H.   (53)

We are now looking for 2-forms ω ∈ ˜Σ^1, i.e. ω = r_c ω^c, c = 1, ..., n, with dω = 0 and all the r's non-zero for non-degenerate solutions. We have

dω = Σ_{a≠b} (dr_a + r_b ξ^b_a + r_a ξ^a_a) ∧ φ^a_V ∧ φ^a_H + (dr_b + r_b ξ^b_b) ∧ φ^b_V ∧ φ^b_H.
Putting dω = 0, we get a system of Pfaffian equations:

dr_b + r_b ξ^b_b = −P_b φ^b_V − Q_b φ^b_H, b fixed,   (54)
dr_a + r_b ξ^b_a + r_a ξ^a_a = −P_a φ^a_V − Q_a φ^a_H, a ≠ b (no sum on a),   (55)

where the P_a's, Q_a's, P_b and Q_b are arbitrary functions on E. Following the EDS procedure, we extend E to a new manifold N with coordinates (t, x^c, u^c, r_c, P_c, Q_c), and now the problem is to find the integrable distributions on N with σ_c = 0, where

σ_b := dr_b + r_b ξ^b_b + P_b φ^b_V + Q_b φ^b_H,   (56)
σ_a := dr_a + r_b ξ^b_a + r_a ξ^a_a + P_a φ^a_V + Q_a φ^a_H, a ≠ b (no sum on a).   (57)

Continuing the EDS process, set π^V_c := dP_c and π^H_c := dQ_c, c = 1, ..., n. (The V and H superscripts here do not indicate that the forms are vertical or horizontal.) Using these, a co-frame on N is (dt, φ^c_V, φ^c_H, σ_c, π^V_c, π^H_c) for c = 1, ..., n. So the next step is to calculate dσ_c modulo the ideal ⟨σ_c⟩. Taking the exterior derivative of (57) gives

dσ_a = dr_b ∧ ξ^b_a + r_b dξ^b_a + dr_a ∧ ξ^a_a + r_a dξ^a_a + π^V_a ∧ φ^a_V + P_a dφ^a_V + π^H_a ∧ φ^a_H + Q_a dφ^a_H
 ≡ (−r_b ξ^b_b − P_b φ^b_V − Q_b φ^b_H) ∧ ξ^b_a + r_b dξ^b_a   (58)
 + (−r_b ξ^b_a − r_a ξ^a_a − P_a φ^a_V − Q_a φ^a_H) ∧ ξ^a_a + r_a dξ^a_a + π^V_a ∧ φ^a_V + P_a dφ^a_V + π^H_a ∧ φ^a_H + Q_a dφ^a_H   (mod σ_c) (no sum on a).

The next step is to see which terms in dσ_a can be absorbed into π^V_a and π^H_a: in each dσ_a, any term that can be written as β ∧ φ^a_V or β ∧ φ^a_H can be absorbed into the terms π^V_a ∧ φ^a_V and π^H_a ∧ φ^a_H respectively.
After this absorption these terms are denoted ˜π^V_a ∧ φ^a_V and ˜π^H_a ∧ φ^a_H, and the remainder, which cannot be absorbed, represents the 'torsion' of the system. Working through (58), it can be seen from (21) and the integrability of the eigen co-distributions Sp{φ^a_V, φ^a_H}, a ≠ b, that the terms dξ^a_a, dφ^a_V and dφ^a_H give no torsion, and that the terms contributing to the torsion are (remembering that b is fixed)

T_a := (r_b(ξ^a_a − ξ^b_b) − P_b φ^b_V − Q_b φ^b_H) ∧ ξ^b_a + r_b dξ^b_a (no sum on a).   (59)

By looking at (59), it can be seen that for those a ≠ b where ξ^b_a = 0, or equivalently τ^b_{Vaa} = 0, the torsion T_a vanishes without any extra conditions. It then follows that

dσ_a ≡ ˜π^V_a ∧ φ^a_V + ˜π^H_a ∧ φ^a_H (mod σ) (no sum).

However, the eigen co-distribution D^⊥_b = Sp{φ^b_V, φ^b_H} is non-integrable by assumption, so there exists at least one ξ^b_a ≠ 0, a ≠ b; and each such a corresponds to one T_a. So we split the problem into two subcases:

i) There is only one fixed a such that ξ^b_a ≠ 0, or equivalently τ^b_{Vaa} ≠ 0.

ii) There is more than one fixed a such that ξ^b_a ≠ 0; say there are a_i ≠ b, i = 1, 2, ..., such that τ^b_{Va_ia_i} ≠ 0.

Considering a ≠ b with ξ^b_a ≠ 0, and remembering that b is fixed, computing dξ^b_a we get:

dξ^b_a = dτ^b_{Vaa} ∧ φ^b_V + τ^b_{Vaa} dφ^b_V + dτ^b_{Haa} ∧ φ^b_H + τ^b_{Haa} dφ^b_H
 = (Γ(τ^b_{Vaa}) dt + X^V_c(τ^b_{Vaa}) φ^c_V + X^H_c(τ^b_{Vaa}) φ^c_H) ∧ φ^b_V
 + τ^b_{Vaa}(−λ_b dt ∧ φ^b_H + τ^b_{Hcb} φ^b_V ∧ φ^c_H + τ^b_{Vcb} φ^b_V ∧ φ^c_V − φ^b_V(R(X^H_b, X^H_c)) φ^b_H ∧ φ^c_H)
 + (Γ(τ^b_{Haa}) dt + X^V_c(τ^b_{Haa}) φ^c_V + X^H_c(τ^b_{Haa}) φ^c_H) ∧ φ^b_H
 + τ^b_{Haa}(dt ∧ φ^b_V + τ^b_{Hcb} φ^b_H ∧ φ^c_H − τ^b_{Vbc} φ^b_V ∧ φ^c_H).
T_a ≡ (r_b(X^V_b(τ^b_{Haa}) − X^H_b(τ^b_{Vaa}) + τ^b_{Haa} A^{aV}_{ab} − τ^b_{Vaa} A^{aH}_{ab} + τ^b_{Vaa} τ^b_{Hbb} − τ^b_{Haa} τ^b_{Vbb}) − P_b τ^b_{Haa} + Q_b τ^b_{Vaa}) φ^b_V ∧ φ^b_H
 + (Γ(τ^b_{Vaa}) + τ^b_{Haa}) dt ∧ φ^b_V + (Γ(τ^b_{Haa}) − λ_b τ^b_{Vaa}) dt ∧ φ^b_H
 + (X^V_c(τ^b_{Vaa}) + τ^b_{Vaa} A^{aV}_{ac} + τ^b_{Vaa} τ^b_{Vcb}) φ^c_V ∧ φ^b_V
 + (X^H_c(τ^b_{Vaa}) + τ^b_{Vaa} A^{aH}_{ac} + τ^b_{Vaa}(τ^b_{Hcb} − τ^b_{Hbc}) + τ^b_{Haa} τ^b_{Vbc}) φ^c_H ∧ φ^b_V
 + (X^V_c(τ^b_{Haa}) + τ^b_{Haa} A^{aV}_{ac} + τ^b_{Haa}(τ^b_{Vcb} − τ^b_{Vbc}) + τ^b_{Vaa} τ^b_{Hbc}) φ^c_V ∧ φ^b_H
 + (X^H_c(τ^b_{Haa}) + τ^b_{Haa} A^{aH}_{ac} + τ^b_{Haa} τ^b_{Hcb} + τ^b_{Vaa} φ^b_V(R(X^H_b, X^H_c))) φ^c_H ∧ φ^b_H
 + (τ^b_{Vaa} τ^b_{Hcc} − τ^b_{Haa} τ^b_{Vcc}) φ^c_V ∧ φ^c_H, c ≠ a ≠ b (sum on c) (mod φ^a_V, φ^a_H).

Using the Jacobi identities (31a), (31b), (34a), (35a), (34b) and (35b) respectively, we have that the coefficients of dt ∧ φ^b_V, dt ∧ φ^b_H, φ^c_V ∧ φ^b_V, φ^c_H ∧ φ^b_V, φ^c_V ∧ φ^b_H and φ^c_H ∧ φ^b_H in T_a vanish. Therefore the torsion is now

T_a ≡ (r_b(X^V_b(τ^b_{Haa}) − X^H_b(τ^b_{Vaa}) + τ^b_{Haa} A^{aV}_{ab} − τ^b_{Vaa} A^{aH}_{ab} + τ^b_{Vaa} τ^b_{Hbb} − τ^b_{Haa} τ^b_{Vbb}) − P_b τ^b_{Haa} + Q_b τ^b_{Vaa}) φ^b_V ∧ φ^b_H   (60)
 + (τ^b_{Vaa} τ^b_{Hcc} − τ^b_{Haa} τ^b_{Vcc}) φ^c_V ∧ φ^c_H, c ≠ a ≠ b (sum on c) (mod φ^a_V, φ^a_H).

The torsion must be zero for the existence of solutions, and if it is then, for each a ≠ b with ξ^b_a ≠ 0,

dσ_a ≡ ˜π^V_a ∧ φ^a_V + ˜π^H_a ∧ φ^a_H (mod σ).

We now examine the conditions for the torsion to vanish. By looking at (60), we get:

• If there is only one a ≠ b such that ξ^b_a ≠ 0, i.e. case i), then this remaining T_a vanishes if and only if

Q_b = P_b B^b + r_b C, where B^b = τ^b_{Haa}/τ^b_{Vaa}   (61)

and

C = (1/τ^b_{Vaa})(X^H_b(τ^b_{Vaa}) − X^V_b(τ^b_{Haa}) − τ^b_{Haa} A^{aV}_{ab} + τ^b_{Vaa} A^{aH}_{ab} − τ^b_{Vaa} τ^b_{Hbb} + τ^b_{Haa} τ^b_{Vbb}).

• If there exist at least two a_i's such that ξ^b_{a_i} ≠ 0, a_i ≠ b, i.e.
case ii), then the conditions for all the torsion to vanish are (61), with

B^b = τ^b_{Ha_ia_i}/τ^b_{Va_ia_i},   (62)

C = C_{a_i} = (1/τ^b_{Va_ia_i})(X^H_b(τ^b_{Va_ia_i}) − X^V_b(τ^b_{Ha_ia_i}) − τ^b_{Ha_ia_i} A^{a_iV}_{a_ib} + τ^b_{Va_ia_i} A^{a_iH}_{a_ib} − τ^b_{Va_ia_i} τ^b_{Hbb} + τ^b_{Ha_ia_i} τ^b_{Vbb}),   (63)

for all such a_i.

In either case i) or ii) the choice of Q_b in condition (61) affects σ_b in (56), and hence dσ_b. Substituting Q_b into dσ_b we get

dσ_b = dr_b ∧ ξ^b_b + r_b dξ^b_b + dP_b ∧ φ^b_V + P_b dφ^b_V + d(P_b B^b + r_b C) ∧ φ^b_H + (P_b B^b + r_b C) dφ^b_H
 ≡ (−r_b ξ^b_b − P_b φ^b_V − (r_b C + P_b B^b) φ^b_H) ∧ ξ^b_b + r_b dξ^b_b   (64)
 + dP_b ∧ φ^b_V + P_b dφ^b_V + dP_b ∧ B^b φ^b_H + P_b d(B^b φ^b_H)
 + C(−r_b ξ^b_b − P_b φ^b_V − (r_b C + P_b B^b) φ^b_H) ∧ φ^b_H + r_b d(Cφ^b_H)   (mod σ).

Simplifying (64) we get:

dσ_b ≡ (π^V_b + P_b(ξ^b_b + Cφ^b_H)) ∧ (φ^b_V + B^b φ^b_H)   (65)
 + r_b d(ξ^b_b + Cφ^b_H) + P_b d(φ^b_V + B^b φ^b_H)   (mod σ).

At this point the problem breaks down into two further subcases:

1. If d(φ^b_V + B^b φ^b_H) = κ ∧ (φ^b_V + B^b φ^b_H), for some 1-form κ, then the condition for the existence of non-degenerate solutions is that

d(ξ^b_b + Cφ^b_H) = β ∧ (φ^b_V + B^b φ^b_H), for some 1-form β,   (66)

and then dσ_b ≡ ˜π^V_b ∧ (φ^b_V + B^b φ^b_H) (mod σ).
2. If d(φ^b_V + B^b φ^b_H) ≠ κ ∧ (φ^b_V + B^b φ^b_H) for any 1-form κ, then in order to remove the torsion we require r_b d(ξ^b_b + Cφ^b_H) + P_b d(φ^b_V + B^b φ^b_H) ≡ 0 (mod σ, φ^b_V + B^b φ^b_H). This results in an equation relating P_b to r_b which would fix P_b as a function of r_b, and thus we will have lost the flexibility in π^V_b = dP_b to absorb any terms. So in this situation the problem reduces to finding a solution for P_b in terms of r_b of the equation

(dP_b + P_b(ξ^b_b + Cφ^b_H)) ∧ (φ^b_V + B^b φ^b_H) + r_b d(ξ^b_b + Cφ^b_H) + P_b d(φ^b_V + B^b φ^b_H) = 0.   (67)

Thus if there exists a function P_b in terms of r_b satisfying equation (67), then we have dσ_b ≡ 0 (mod σ).

Let us assume that we are in subcase 1, so that there exists an integrable direction α^b = φ^b_V + B^b φ^b_H, and assume that d(ξ^b_b + Cφ^b_H) = β ∧ (φ^b_V + B^b φ^b_H), since otherwise there would be no non-degenerate solution. Then, moving on to the calculation of the freedom in the solution of the inverse problem for this case, we have

dσ_b ≡ ˜π^V_b ∧ (φ^b_V + B^b φ^b_H),   (68)
dσ_a ≡ ˜π^V_a ∧ φ^a_V + ˜π^H_a ∧ φ^a_H.   (69)

We change the basis {φ^k_V, φ^k_H} to the basis {γ^k_V, γ^k_H} using

γ^1_{V,H} = φ^1_{V,H} + φ^2_{V,H} + ··· + φ^n_{V,H}, γ^c_{V,H} = φ^1_{V,H} − φ^c_{V,H}, c = 2, ..., n.

We then get the optimal tableau:

Π =
        γ^1_V    γ^1_H        γ^2_V    γ^2_H    ...  γ^b_V     γ^b_H         ...  γ^n_V    γ^n_H
σ_1     ˜π^V_1   ˜π^H_1       ˜π^V_1   ˜π^H_1   ...  ˜π^V_1    ˜π^H_1        ...  ˜π^V_1   ˜π^H_1
σ_2     ˜π^V_2   ˜π^H_2       −˜π^V_2  −˜π^H_2  ...  0         0             ...  0        0
...
σ_b     ˜π^V_b   B^b ˜π^V_b   0        0        ...  −˜π^V_b   −B^b ˜π^V_b   ...  0        0
...
σ_n     ˜π^V_n   ˜π^H_n       0        0        ...  0         0             ...  −˜π^V_n  −˜π^H_n

This tableau gives Cartan characters: s_1 = n, s_2 = n − 1, s_i = 0 for i ≥ 3. Let t be the number of ways that the ˜π^V_a, ˜π^H_a and ˜π^V_b can be altered such that (68) and (69) are unchanged. It can be seen that if we write

¯π^V_a = ˜π^V_a + f^1_a φ^a_V + f^2_a φ^a_H, ¯π^H_a = ˜π^H_a + f^2_a φ^a_V + f^3_a φ^a_H, ¯π^V_b = ˜π^V_b + f_b(φ^b_V + B^b φ^b_H),

then (68) and (69) are unchanged if we replace ˜π^{V,H}_a by ¯π^{V,H}_a and ˜π^V_b by ¯π^V_b. Thus for each a ≠ b we have three degrees of freedom in adding terms to ˜π^V_a and ˜π^H_a, giving 3(n − 1) degrees of freedom for all the ˜π^{V,H}_a. We have only 1 degree of freedom in adding terms to ˜π^V_b. Therefore in this case t = 3(n − 1) + 1 = 3n − 2, which equals s_1 + 2s_2 as required for involution. So the solution depends on s_2 = n − 1 arbitrary functions of two variables.

Now consider subcase 2, without an integrable direction α^b, and assume that there is a solution of equation (67), so that we have

dσ_b ≡ 0,   (70)
dσ_a ≡ ˜π^V_a ∧ φ^a_V + ˜π^H_a ∧ φ^a_H.   (71)

The tableau corresponding to this system is

˜Π =
        γ^1_V    γ^1_H     γ^2_V    γ^2_H    ...  γ^b_V  γ^b_H  ...  γ^n_V    γ^n_H
σ_1     ˜π^V_1   ˜π^H_1    ˜π^V_1   ˜π^H_1   ...  ˜π^V_1 ˜π^H_1 ...  ˜π^V_1   ˜π^H_1
σ_2     ˜π^V_2   ˜π^H_2    −˜π^V_2  −˜π^H_2  ...  0      0      ...  0        0
...
σ_b     0        0         0        0        ...  0      0      ...  0        0
...
σ_n     ˜π^V_n   ˜π^H_n    0        0        ...  0      0      ...  −˜π^V_n  −˜π^H_n

This tableau gives Cartan characters: s_1 = n − 1, s_2 = n − 1, s_i = 0 for i ≥ 3. Let t be the number of ways the ˜π^V_a and ˜π^H_a can be altered such that (70) and (71) are unchanged. It can be seen that if we write

¯π^V_a = ˜π^V_a + f^1_a φ^a_V + f^2_a φ^a_H, ¯π^H_a = ˜π^H_a + f^2_a φ^a_V + f^3_a φ^a_H,

then (70) and (71) are unchanged if we replace ˜π^{V,H}_a by ¯π^{V,H}_a. Thus for each a ≠ b we have three degrees of freedom in adding terms to ˜π^V_a and ˜π^H_a, giving 3(n − 1) degrees of freedom for all the ˜π^{V,H}_a. Therefore in this case t = 3(n − 1), which equals s_1 + 2s_2 as required for involution. So the solution depends on n − 1 arbitrary functions of two variables.

Theorem 4.1.
In the case where Φ is diagonalisable with distinct eigenvalues and exactly one non-integrable co-distribution there are three possibilities.

• If ⟨˜Σ^1⟩ is not a differential ideal, then there is no non-degenerate solution.

• Suppose ⟨˜Σ^1⟩ is a differential ideal, and there is an integrable direction in the non-integrable co-distribution. If (66) holds, then the solution depends on n − 1 arbitrary functions of 2 variables. If (66) does not hold, then there is no solution.

• Suppose ⟨˜Σ^1⟩ is a differential ideal, and there is no integrable direction in the non-integrable co-distribution. If (67) admits a solution, then the solution depends on n − 1 arbitrary functions of 2 variables. If (67) does not admit a solution, then there is no solution.

5 Examples

In this section we provide a number of examples to illustrate the results of the previous section. These examples cover most of the subcases of the one non-integrable eigen co-distribution case. We also give two examples (examples 5 and 6) of the case n = 3 with two non-integrable co-distributions. We note that while example 5 may be considered as an extension of Douglas's case IIa2, example 6 may be of Douglas's case IIIa.

Example 1.
This is an example of non-existence for n = 2, because there is not a differential ideal at step 1; see Theorem 3.12. It is in Douglas's case IIIb. Consider the system

ẍ = ẏ, ÿ = y   (72)

on an appropriate domain. This example was considered by Prince [17], who showed by direct calculation that no non-degenerate solution exists. Φ is diagonal, and the eigenvalues and corresponding eigenvectors are:

0 and X_1 := (a, 0), and −1 and X_2 := (0, b),

for some parameters a, b. We calculate ∇̂_Γ X^V_1 and ∇̂_Γ X^V_2 to find the τ^i_{Γj}, i, j ∈ {1, 2}. In this case, by choosing a = 1 we get ∇̂_Γ X^V_1 = 0, i.e. τ^1_{Γ1} = 0 = τ^1_{Γ2}, but there is no b ≠ 0 such that ∇̂_Γ X^V_2 = 0; in particular we have τ^2_{Γ1} = −b and τ^2_{Γ2} = Γ(b)/b. This immediately implies that ˜Σ^1 is not a differential ideal, as τ^2_{Γ1} ≠ 0, by Corollary 3.8. The functions τ^k_{Vij} and τ^k_{Hij} are also easy to compute, and apart from the other τ^k_{V,Hij} we have τ^1_{V22} = 0, τ^2_{V11} ≠ 0. So this is the case where Φ is diagonalisable with distinct eigenvalues, the eigen co-distribution Sp{φ^1_V, φ^1_H} is integrable, and Sp{φ^2_V, φ^2_H} is non-integrable since τ^2_{V11} ≠ 0. Therefore there is no non-degenerate solution of the inverse problem, since ⟨˜Σ^1⟩ is not a differential ideal, by Theorem 3.12.

Example 2.
This is another non-existence example, this time for n = 4. The example is in subcase i) but fails the condition of the further subcase 1. Consider the system

ẍ = x, ÿ = 0, z̈ = ẏ/ż, ẅ = ẇ   (73)

on an appropriate domain. We find

Φ =
[ −1   0           0            0
   0   0           0            0
   0  −ẏ/(4ż³)    3ẏ²/(4ż⁴)   0
   0   0           0           −1/4 ].

In this case it is possible to choose the eigenvectors X_a such that ∇̂_Γ X^V_a = 0, and so the eigenvalues and corresponding scaled eigenvectors are

−1 and X_1 = (1, 0, 0, 0), 0 and X_2 = (0, 3ẏ, ż, 0), 3ẏ²/(4ż⁴) and X_3 = (0, 0, √ż, 0), and −1/4 and X_4 = (0, 0, 0, √ẇ).

The non-zero τ^a_{Vbc} and τ^a_{Hbc} are

τ_V = 3, τ_V = −ż√ż, τ_H = 3ẏ√ż, τ_V = −1, τ_V = 1, τ_V = −(1/2)ż√ż, τ_H = −ẏż√ż, τ_V = (1/2)√ẇ, τ_H = (1/4)√ẇ.

These results show we are in the case where Φ is diagonalisable with distinct eigenvalues; three of the eigen co-distributions are integrable, one is non-integrable, and ⟨˜Σ^1⟩ is a differential ideal. In particular, we are in subcase i), where there is only one ξ^b_a ≠ 0, corresponding to a single τ^b_{Vaa} ≠ 0. We also find that the condition

X^V_2(B) = B τ_V − τ_H, where X^V_2 = 3ẏ ∂/∂ẏ + ż ∂/∂ż and B = τ_H/τ_V,

is satisfied for an integrable direction α = φ_V + Bφ_H in the non-integrable co-distribution. The existence of a non-degenerate solution then depends on whether or not d(ξ + Cφ_H) = κ ∧ α. Using the formula for C from (63), we get

C = (1/τ_V)(X^H(τ_V) − X^V(τ_H) − τ_H A_V + τ_V A_H − τ_V τ_H + τ_H τ_V),

which is a non-zero function, while dξ = d(2φ_V) = 0. Therefore there is no regular solution, since d(Cφ_H) ≠ κ ∧ α.

Example 3.
This example is from Aldridge et al [2], which was modified from one of Douglas's examples in [9]. However, in their calculation only the differential ideal step was applied to find the differential ideal; they then had to use the differential conditions of the original Helmholtz conditions to determine the existence of the solution. By following the scheme given at the end of the previous section we quickly conclude that the solution of this system depends on two functions of 2 variables each. This example is a candidate for our case i), where only one τ^b_{Vaa} ≠ 0 for a ≠ b.

Consider the system

ẍ = −x, ÿ = y − (1 + ẏ + ż), z̈ = 0   (74)

on an appropriate domain. Denoting the derivatives by u, v, w, the eigenvalues and corresponding eigenvectors X_a, chosen so that ∇̂_Γ X^V_a = 0, are

λ = 0 and X_1 = (0, vw, w), λ = 1 and X_2 = (1, 0, 0), λ = 2y − (1 + w) and X_3 = (0, y, ...).

The only non-zero functions τ^a_{Vbc} and τ^a_{Hbc} are

τ_V = v/y, τ_V = 2w, τ_V = w, τ_H = −w/y.

These results show that our system is in the case where Φ is diagonalisable with distinct eigenvalues, the eigen co-distributions Sp{φ^1_V, φ^1_H} and Sp{φ^2_V, φ^2_H} are integrable while the third is not, and ⟨˜Σ^1⟩ is a differential ideal. In particular, this is in subcase i), where there is only one non-zero τ of the relevant type. Checking the condition for an integrable direction α = φ_V + Bφ_H, we find that

X^V(τ_H/τ_V) = (τ_H/τ_V) τ_V − τ_H, where X^V = vw ∂/∂v + (1 + w) ∂/∂w,

holds. Now whether or not the system has solutions depends on the remaining condition: d(ξ + Cφ_H) = κ ∧ α. Using the formula for C from (63), we get C = 0, and furthermore we have dξ = 0. Therefore the solution of the system depends on 2 functions of 2 variables.

Example 4. This example is in case ii), and the multiplier depends on two arbitrary functions, each of two variables, and one function of one variable.

Consider the system

ẍ = z, ÿ = xẋ + zż, z̈ = x   (75)

on an appropriate domain.
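As a check on the computations that follow, Φ and its eigenstructure for system (75) can be obtained symbolically. This is a sketch assuming the usual convention Φ^a_b = −∂F^a/∂x^b − Γ^a_cΓ^c_b − Γ(Γ^a_b) with Γ^a_b = −(1/2)∂F^a/∂ẋ^b, a convention not defined in this excerpt:

```python
import sympy as sp

t, x, y, z, u, v, w = sp.symbols('t x y z u v w')
q = [x, y, z]          # positions
qd = [u, v, w]         # velocities (u = xdot, v = ydot, w = zdot)
F = [z, x*u + z*w, x]  # right-hand sides of system (75)

def Gamma(f):
    # total derivative along Gamma = d/dt + qd*d/dq + F*d/dqd
    return (sp.diff(f, t)
            + sum(qd[i]*sp.diff(f, q[i]) for i in range(3))
            + sum(F[i]*sp.diff(f, qd[i]) for i in range(3)))

# connection coefficients and Jacobi endomorphism (assumed sign convention)
G = sp.Matrix(3, 3, lambda a, b: -sp.Rational(1, 2)*sp.diff(F[a], qd[b]))
Phi = sp.Matrix(3, 3, lambda a, b:
                -sp.diff(F[a], q[b])
                - sum(G[a, c]*G[c, b] for c in range(3))
                - Gamma(G[a, b]))

print(Phi)  # Matrix([[0, 0, -1], [-u/2, 0, -w/2], [-1, 0, 0]])
print(Phi.eigenvals())  # eigenvalues 0, 1 and -1, each simple: distinct, so diagonalisable
```

Under this convention the eigenvalues come out distinct (0, 1 and −1), and the eigenvectors agree, up to scaling, with those quoted below.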
Again denoting the derivatives by u, v, w, we find

Φ =
[  0     0   −1
  −u/2   0   −w/2
  −1     0    0  ].

In this case it is possible to choose the eigenvectors X_a such that ∇̂_Γ X^V_a = 0, and so the eigenvalues and chosen corresponding eigenvectors are:

0 and (0, 1, 0), 1 and (−2, u − w, 2), and −1 and (−2, u + w, −2).

The τ^a_{Vbc} and τ^a_{Hbc} are zero except for

τ^2_{V11} = −4, τ^2_{V33} = 4.

These results show that our system is in the case where Φ is diagonalisable with distinct eigenvalues, the eigen co-distribution Sp{φ^2_V, φ^2_H} is non-integrable, Sp{φ^1_V, φ^1_H} and Sp{φ^3_V, φ^3_H} are integrable, and ⟨˜Σ^1⟩ is a differential ideal. We are in subcase ii). In addition, the ratio condition (62),

τ^2_{H11}/τ^2_{V11} = 0 = τ^2_{H33}/τ^2_{V33},

and the condition (63), C_1 = 0 = C_3, are satisfied. We can conclude that there is an integrable direction inside the non-integrable co-distribution, which is just φ^2_V. Furthermore we have ξ^2_2 = 0, and so d(ξ^2_2 + Cφ^2_H) = 0. Therefore the conclusion is that the solution of the system depends upon 2 arbitrary functions of 2 variables (plus one function of one variable).

In order to identify the structure of the r's, we return to equations (54) and (55). Since in this case we know that Sp{φ^2_V, φ^2_H} is non-integrable, the other two co-distributions are integrable, and φ^2_V is the integrable direction, we have

dr_2 + r_2 ξ^2_2 = −P_2 φ^2_V − Q_2 φ^2_H,
dr_1 + r_2 ξ^2_1 + r_1 ξ^1_1 = −P_1 φ^1_V − Q_1 φ^1_H,
dr_3 + r_2 ξ^2_3 + r_3 ξ^3_3 = −P_3 φ^3_V − Q_3 φ^3_H,

where P_1, Q_1, P_2, Q_2, P_3 and Q_3 are arbitrary functions. As we know, Q_2 = P_2 B² + r_2 C = 0, and computing the ξ^i_j's using (53) we get ξ^1_1 = ξ^2_2 = ξ^3_3 = 0, ξ^2_1 = −4φ^2_V and ξ^2_3 = 4φ^2_V. So we have

dr_2 = −P_2 φ^2_V,   (76a)
dr_1 − 4r_2 φ^2_V = −P_1 φ^1_V − Q_1 φ^1_H,   (76b)
dr_3 + 4r_2 φ^2_V = −P_3 φ^3_V − Q_3 φ^3_H.   (76c)

Equation (76a) implies r_2 = G(ζ), where G is an arbitrary function of ζ with dζ ∈ Sp{φ^2_V}. Since φ^2_V = (1/2)(−d(uw) + 2dv − x dx − z dz), putting dζ = −d(uw) + 2dv − x dx − z dz we have ζ = 2v − uw − x²/2 − z²/2. Now substituting r_2 into equation (76b) we get

dr_1 = 2G(ζ) dζ − P_1 φ^1_V − Q_1 φ^1_H.
(77)

Since Sp{φ^2_V, φ^2_H} is integrable, we know that there exist functions u_1 and u_2 such that Sp{du_1, du_2} = Sp{φ^2_V, φ^2_H}. As φ^2_V = (d(w − u) + (z − x) dt) and φ^2_H = (d(z − x) + (u − w) dt), putting

du_i = f_i (d(w − u) + (z − x) dt) + g_i (d(z − x) + (u − w) dt),

and taking f_1 = 2(w − u), g_1 = 2(z − x), f_2 = −(z − x)/((w − u)² + (z − x)²) and g_2 = (w − u)/((w − u)² + (z − x)²), we get

u_1 = (w − u)² + (z − x)²,
u_2 = tan⁻¹((z − x)/(w − u)) − t.

Now equation (77) gives

r_2 = 2G(ζ) + r̃_2(u_1, u_2),

where G′ = G_1 and r̃_2 is an arbitrary function of u_1 and u_2.

Similarly, Sp{φ^3_V, φ^3_H} is integrable with φ^3_V = (d(u + w) − (x + z) dt) and φ^3_H = (d(x + z) − (u + w) dt). We find

u_3 = (u + w)² − (x + z)² and u_4 = tanh⁻¹((x + z)/(u + w)) − t,

with Sp{du_3, du_4} = Sp{φ^3_V, φ^3_H}. This and equation (76c) gives

r_3 = −G(ζ) + r̃_3(u_3, u_4).

To recover the multiplier g_ab, we translate the r's into the g_ab using (14). We get

g_11 = w² r_1 + (1/16)(r_2 + r_3),
g_22 = r_1,
g_33 = u² r_1 + (1/16)(r_2 + r_3),
g_12 = g_21 = −w r_1,
g_13 = g_31 = uw r_1 − (1/16)(r_2 + r_3),
g_23 = g_32 = −u r_1.

Substituting for r_1, r_2 and r_3 gives

g_11 = w² G_1(ζ) + (1/16)(r̃_2(u_1, u_2) + r̃_3(u_3, u_4)),
g_22 = G_1(ζ),
g_33 = u² G_1(ζ) + (1/16)(r̃_2(u_1, u_2) + r̃_3(u_3, u_4)),
g_12 = g_21 = −w G_1(ζ),
g_13 = g_31 = uw G_1(ζ) − (1/16)(r̃_2(u_1, u_2) + r̃_3(u_3, u_4)),
g_23 = g_32 = −u G_1(ζ).

In summary, the most general Cartan two-form for this example is

dθ_L = (1/2) G′(ζ) dζ ∧ φ^1_H + (1/32) [ (2G(ζ) + r̃_2(u_1, u_2)) du_1 ∧ du_2 + (−G(ζ) + r̃_3(u_3, u_4)) du_3 ∧ du_4 ].

While this form is beguiling, it is not the generic solution for this class.

We now introduce two n = 3 examples with two non-integrable eigen co-distributions. These correspond to two cases of the differential ideal step in which the system of ordinary differential equations may possibly have non-degenerate solutions. In the first example, the differential ideal step finishes at step 1, i.e. Σ̃ generates a differential ideal, which demonstrates that the number of non-integrable co-distributions is not equal to the number of steps in the differential process, as discussed following Theorem 3.12. The second example is for the case where ⟨Σ̃⟩ is a differential ideal. A full analysis of these cases will appear in our forthcoming paper, in which we present a classification for the n = 3 case.

Example 5.
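Before the detailed analysis, the shape of the Jacobi endomorphism Φ for the system (78) introduced below can be checked symbolically. This is only a sketch, not the paper's own computation: it assumes one common sign convention for the SODE connection and Jacobi endomorphism, namely Γ^a_b = −(1/2) ∂F^a/∂ẋ^b and Φ^a_b = −∂F^a/∂x^b − Γ^a_c Γ^c_b − Γ(Γ^a_b), and it reads (78) as ẍ = xż, ÿ = x, z̈ = x; all helper names are ours.

```python
import sympy as sp

t, x, y, z, u, v, w = sp.symbols('t x y z u v w')
pos = [x, y, z]          # positions
vel = [u, v, w]          # velocities x', y', z'
F = [x*w, x, x]          # reading (78) as x'' = x z', y'' = x, z'' = x

# Connection coefficients of the SODE: Gamma^a_b = -1/2 dF^a/dvel^b
Gamma = sp.Matrix(3, 3, lambda a, b: -sp.Rational(1, 2) * sp.diff(F[a], vel[b]))

def total_derivative(f):
    """Derivative of f along the SODE field Gamma: d/dt + vel.d/dpos + F.d/dvel."""
    return (sp.diff(f, t)
            + sum(vel[i] * sp.diff(f, pos[i]) for i in range(3))
            + sum(F[i] * sp.diff(f, vel[i]) for i in range(3)))

# Jacobi endomorphism under the assumed convention:
# Phi^a_b = -dF^a/dx^b - Gamma^a_c Gamma^c_b - Gamma(Gamma^a_b)
Phi = sp.Matrix(3, 3, lambda a, b:
                -sp.diff(F[a], pos[b])
                - sum(Gamma[a, c] * Gamma[c, b] for c in range(3))
                - total_derivative(Gamma[a, b]))

# F is independent of y and y', so the y-column of Phi vanishes:
# 0 is an eigenvalue with eigenvector (0, 1, 0), as quoted in the text.
print(Phi)
```

Whatever the precise convention, F has no dependence on y or ẏ, so the middle column of Φ vanishes and λ = 0 with eigenvector (0, 1, 0) always appears, matching the eigendata quoted below; the other entries (such as the −w and −1 visible in the text) depend on the convention assumed here.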
Consider the system

ẍ = xż, ÿ = x, z̈ = x (78)

on an appropriate domain. Again denoting the derivatives by u, v, w, we find

Φ = ( − w u − − ).

Eigenvalues and corresponding eigenvectors X_a, chosen so that ∇̂_Γ X^V_a = 0, are

λ_1 = √(− u + w) − w and X_1 = (−√(− u + w) + w, , ),
λ_2 = −√(− u + w) − w and X_2 = (√(− u + w) + w, , ),
λ_3 = 0 and X_3 = (0, , ).

The non-zero functions τ^a_{Vbc} and τ^a_{Hbc} are

τ_V = −τ_V = √(− u + w) − w /(u − w), τ_H = −τ_H = x /(u − w),
τ_V = −τ_V = 3√(− u + w) + w /(u − w), τ_H = −τ_H = −x /(u − w),
τ_V = −τ_V = 3√(− u + w) − w /(u − w), τ_H = −τ_H = x /(u − w),
τ_V = −τ_V = √(− u + w) + w /(u − w), τ_H = −τ_H = −x /(u − w).

These results show that our system is in the case where Φ is diagonalisable with distinct eigenvalues and the two co-distributions Sp{φ^1_V, φ^1_H} and Sp{φ^2_V, φ^2_H} are non-integrable, but the differential ideal step finishes at step 1, that is, Σ̃ generates a differential ideal. Further examination shows that the solution depends on one arbitrary function of two variables, as we will show in the forthcoming paper.

Example 6.
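The eigendata quoted for the system (79) below can be reproduced with a short symbolic check. This is a sketch under an assumed convention (ours, not necessarily the paper's): since the right-hand sides have no velocity dependence, the connection coefficients vanish and Φ^a_b reduces to −∂F^a/∂x^b.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x, y, z = sp.symbols('x y z')
pos = [x, y, z]
F = [z*t, 0, x]   # system (79): x'' = zt, y'' = 0, z'' = x

# No velocity dependence, so Gamma = 0 and (under the assumed convention)
# the Jacobi endomorphism is simply Phi^a_b = -dF^a/dx^b.
Phi = sp.Matrix(3, 3, lambda a, b: -sp.diff(F[a], pos[b]))

print(Phi.eigenvals())   # eigenvalues sqrt(t), -sqrt(t), 0

# Eigenvector checks matching the text:
assert Phi * sp.Matrix([-sp.sqrt(t), 0, 1]) == sp.sqrt(t) * sp.Matrix([-sp.sqrt(t), 0, 1])
assert Phi * sp.Matrix([sp.sqrt(t), 0, 1]) == -sp.sqrt(t) * sp.Matrix([sp.sqrt(t), 0, 1])
assert Phi * sp.Matrix([0, 1, 0]) == sp.zeros(3, 1)
```

The assertions confirm the eigenvalues ±√t and 0 with eigenvectors (∓√t, 0, 1) and (0, 1, 0), exactly the data stated in the example.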
Consider the system

ẍ = zt, ÿ = 0, z̈ = x (79)

on an appropriate domain. Again denoting the derivatives by u, v, w, we find

Φ = [ 0 0 −t ; 0 0 0 ; −1 0 0 ].

Eigenvalues and corresponding eigenvectors X_a are

λ_1 = √t and X_1 = (−√t, 0, 1),
λ_2 = −√t and X_2 = (√t, 0, 1),
λ_3 = 0 and X_3 = (0, 1, 0).

The structure functions τ are zero except for

τ = τ = −τ = −τ = 1/(4t).

These results show that our system is in the case where Φ is diagonalisable with distinct eigenvalues, the co-distributions Sp{φ^1_V, φ^1_H} and Sp{φ^2_V, φ^2_H} are non-integrable, the third one is integrable, and Σ̃ does not generate a differential ideal. Examining further we get

d(ω^1 − ω^2) = (1/(4t)) dt ∧ (ω^1 − ω^2),

and so ⟨Σ̃⟩ is a differential ideal. The Cartan two-form for this example is

ω = G√t (ω^1 − ω^2) + r̃(u_1, u_2) ω^3,

where G is a constant and r̃ is an arbitrary function of the two variables u_1 := y − vt and u_2 := v.

Conclusion
We finish this paper with a new proposal for a classification scheme for the inverse problem in dimension n:

A. Φ = λI_n. This is equivalent to ⟨Σ⟩ being a differential ideal (see Proposition 3.5).

B. Φ is diagonalisable with distinct eigenvalues (real or complex). Further subcases are divided according to the integrability of the lifted two-dimensional eigen co-distributions of Φ, i.e. q co-distributions are non-integrable and n − q are integrable. According to our Theorem 3.12, if up to and including ⟨Σ̃_q⟩ there is no differential ideal, then there is no non-degenerate multiplier. Hence, for each q, the subcases to be considered are that a differential ideal is generated at step 1, step 2, ..., up to step q. Subsidiary to this is the consideration of integrable directions within non-integrable eigen co-distributions.

C. Φ is diagonalisable with repeated eigenvalues. Further subdivision according to integrability is similar to case B above.

D. Φ is not diagonalisable. Further subdivision depends on the integrability of normal forms of Φ.

We invite the reader who has persevered thus far to compare this scheme with the geometric translation of Douglas's scheme for n = 2 to be found in [7]. If that scheme were translated into EDS terms, the differential ideal conditions would be followed by integrability conditions on the eigen co-distributions. In the light of Theorem 3.12 and our examples, we maintain that the integrability of the eigen co-distributions must be considered first.

Acknowledgements
Thoan Do gratefully acknowledges receipt of a Vietnamese government MOET-VIED scholarship and scholarship support from La Trobe University, and the hospitality of the Australian Mathematical Sciences Institute. Both authors thank Willy Sarlet for useful discussions and his continuing interest. We thank the two anonymous referees for their helpful suggestions.
References

[1] J.E. Aldridge, Aspects of the Inverse Problem in the Calculus of Variations, Ph.D. Thesis, La Trobe University, Australia (2003).
[2] J.E. Aldridge, G.E. Prince, W. Sarlet and G. Thompson, An EDS approach to the inverse problem in the calculus of variations, J. Math. Phys. (2006) 103508 (22 pages).
[3] I. Anderson and G. Thompson, The inverse problem of the calculus of variations for ordinary differential equations, Memoirs Amer. Math. Soc. No. 473 (1992).
[4] R.L. Bryant, S.S. Chern, R.B. Gardner, H.L. Goldschmidt and P.A. Griffiths, Exterior Differential Systems (Springer, Berlin) (1991).
[5] M. Crampin, G.E. Prince, W. Sarlet and G. Thompson, The inverse problem of the calculus of variations: separable systems, Acta Appl. Math. (1999) 239–254.
[6] M. Crampin, G.E. Prince and G. Thompson, A geometric version of the Helmholtz conditions in time dependent Lagrangian dynamics, J. Phys. A: Math. Gen. (1984) 1437–1447.
[7] M. Crampin, W. Sarlet, E. Martínez, G.B. Byrnes and G.E. Prince, Toward a geometrical understanding of Douglas's solution of the inverse problem in the calculus of variations, Inverse Problems (1994) 245–260.
[8] T. Do and G.E. Prince, An intrinsic and exterior form of the Bianchi identities, submitted (2014).
[9] J. Douglas, Solution of the inverse problem of the calculus of variations, Trans. Amer. Math. Soc. (1941) 71–128.
[10] J. Grifone and Z. Muzsnay, Variational Principles for Second-Order Differential Equations, World Scientific (2000).
[11] H. Helmholtz, Über die physikalische Bedeutung des Princips der kleinsten Wirkung, J. Reine Angew. Math. (1887) 137–166.
[12] A. Hirsch, Die Existenzbedingungen des verallgemeinerten kinetischen Potentials, Math. Ann. (1898) 429–441.
[13] M. Jerie and G.E. Prince, Jacobi fields and linear connections for arbitrary second order ODEs, J. Geom. Phys. (2002) 351–370.
[14] O. Krupková and G.E. Prince, Second order ordinary differential equations in jet bundles and the inverse problem of the calculus of variations, in: Handbook of Global Analysis, edited by D. Krupka and D. Saunders, Elsevier (2008).
[15] E. Massa and E. Pagani, Jet bundle geometry, dynamical connections, and the inverse problem of Lagrangian mechanics, Ann. Inst. Henri Poincaré, Phys. Théor. (1994) 17–62.
[16] G. Morandi, C. Ferrario, G. Lo Vecchio, G. Marmo and C. Rubano, The inverse problem in the calculus of variations and the geometry of the tangent bundle, Phys. Rep. (1990) 147–284.
[17] G.E. Prince, The inverse problem in the calculus of variations and its ramifications, in: Geometric Approaches to Differential Equations, edited by P. Vassiliou, Lecture Notes of the Australian Mathematical Society, CUP (2000).
[18] G.E. Prince and D.M. King, The inverse problem in the calculus of variations: nonexistence of Lagrangians, in: Differential Geometric Methods in Mechanics and Field Theory: Volume in Honour of Willy Sarlet, edited by F. Cantrijn and B. Langerock, Gent, Academia Press (2007).
[19] W. Sarlet, The Helmholtz conditions revisited. A new approach to the inverse problem of Lagrangian dynamics, J. Phys. A: Math. Gen. (1982) 1503–1517.
[20] W. Sarlet, M. Crampin and E. Martínez, The integrability conditions in the inverse problem of the calculus of variations for second-order ordinary differential equations, Acta Appl. Math. (1998) 233–273.
[21] W. Sarlet, G. Thompson and G.E. Prince, The inverse problem of the calculus of variations: the use of geometrical calculus in Douglas's analysis, Trans. Amer. Math. Soc. (2002) 2897–2919.
[22] W. Sarlet, A. Vandecasteele, F. Cantrijn and E. Martínez, Derivations of forms along a map: the framework for time-dependent second-order equations, Diff. Geom. Applic. (1995) 171–203.
[23] N. Ya. Sonin, On the definition of maximal and minimal properties, Warsaw Univ. Izvestiya 1–2 (1886).