Integrability of local and nonlocal non-commutative fourth order quintic NLS equations
SIMON J.A. MALHAM
Abstract.
We prove integrability of a generalised non-commutative fourth order quintic NLS equation. The proof is relatively succinct and rooted in the linearisation method pioneered by Ch. Pöppe. It is based on solving the corresponding linearised partial differential system to generate an evolutionary Hankel operator for the 'scattering data'. The time-evolutionary solution to the non-commutative nonlinear partial differential system is then generated by solving a linear Fredholm equation which corresponds to the Marchenko equation. The integrability of reverse space-time and reverse time nonlocal versions, in the sense of Ablowitz and Musslimani [2], of the fourth order quintic NLS equation are proved contiguously by the approach adopted. Further, we implement a numerical integration scheme based on the analytical approach above which involves solving the linearised partial differential system followed by numerically solving the linear Fredholm equation to generate the solution at any given time.

1. Introduction
We prove a generalised non-commutative fourth order quintic NLS equation is integrable. Here 'integrable' means the equation can be linearised. Precisely though briefly, given time-evolutionary solutions to the corresponding linearised equation, we can generate corresponding solutions to the original nonlinear equation at any given time by solving a linear integral Fredholm equation at that time. The Fredholm equation in question corresponds to the Marchenko equation. This approach to finding solutions to classical integrable systems such as the sine–Gordon, Korteweg de Vries, modified Korteweg de Vries, the whole associated Korteweg de Vries hierarchy and also the nonlinear Schrödinger equation was pioneered by Ch. Pöppe in a sequence of papers, see Pöppe [49–51], Pöppe and Sattinger [52] and Bauhardt and Pöppe [7]. Recently Doikou et al. [16, 17] extended Pöppe's approach. First they demonstrated that, for the Korteweg de Vries and nonlinear Schrödinger equations, only Pöppe's celebrated kernel product rule is required for the approach to work, see Doikou et al. [17]. Second they demonstrated the approach, as considered by Bauhardt and Pöppe [7], is naturally non-commutative and extends to the non-commutative NLS and non-commutative modified Korteweg de Vries equations, see Doikou et al. [16]. They also show how the method naturally extends to nonlocal versions of these equations in the sense given in Ablowitz and Musslimani [2], i.e. where the nonlocality consists of reverse space-time or reverse time fields as factors in the nonlinear terms. The results herein are the natural extension of the results in Doikou et al. [16] to the fourth order non-commutative case.
Date: 21st November 2020.
Let us explain Pöppe's approach in some more detail. Consider a nonlinear complex matrix-valued partial differential equation for g = g(x; t) of the form
∂_t g = d(∂)g + Ψ(g, ∂g, ∂²g, …),
where ∂ = ∂_x. Here we suppose d = d(∂) is a constant coefficient polynomial in ∂ while Ψ is a precise non-commutative polynomial function of g and its partial derivatives up to an order two less than the degree of d. Suppose the linearisation of the partial differential equation above yields the linear partial differential equation for the corresponding complex matrix-valued function p = p(x; t) as follows: ∂_t p = d(∂)p. This assumes we a priori subsumed any homogeneous linear terms in Ψ into the term d(∂)g. The quantity p = p(x; t) represents the 'scattering data'. The first step in Pöppe's approach is to elevate the Marchenko equation to the operator level. We construct a Hankel operator P = P(x, t) associated with the scattering data as follows. We assume P is a Hilbert–Schmidt operator with integral kernel given by p = p(y + z + x; t) so that for any square-integrable function φ:
(Pφ)(y; x, t) := ∫_{−∞}^0 p(y + z + x; t) φ(z) dz.
Note P = P(x, t) satisfies the operator differential equation ∂_t P = d(∂)P. We then define an associated 'data' operator Q = Q(x, t) by Q := P†P, where P† is the adjoint operator to P. This precise assignment for Q does depend on the application at hand. For the applications herein we make the choice stated, guided by Ablowitz et al. [4]. The crucial classical ingredient is the Marchenko equation and here, at the operator level, this has the form
P = G(id + Q).
This is a linear Fredholm equation for the operator G = G(x, t).
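The operator construction just described can be sketched numerically (this sketch is not from the paper): we discretise the Hankel operator P on a truncated grid for (−∞, 0], form the data operator Q = P†P by quadrature, and solve the Fredholm equation P = G(id + Q) as a linear algebraic system. The scattering kernel p(s) = c eˢ and the amplitude c are illustrative assumptions, chosen so the truncated integrals converge and the Neumann series for (id + Q)⁻¹ converges rapidly.

```python
import numpy as np

# Grid for the truncated domain [-L, 0] approximating (-infty, 0].
L, N = 30.0, 1000
xi = np.linspace(-L, 0.0, N)
w = np.full(N, xi[1] - xi[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoidal weights

c = 0.05                      # small amplitude (illustrative assumption)
p = lambda s: c * np.exp(s)   # illustrative scattering kernel p(y + z + x)

def fredholm_solve(x=0.0):
    """The three ingredients: Hankel operator P, data operator Q = P^dag P,
    and the linear Fredholm (Marchenko) equation P = G(id + Q)."""
    Pker = p(xi[:, None] + xi[None, :] + x)       # Hankel kernel p(y + z + x)
    D = np.diag(w)
    Pop = Pker @ D                                # operator action with quadrature
    Qop = Pop.conj().T @ Pop                      # Q = P^dag P
    Gop = Pop @ np.linalg.inv(np.eye(N) + Qop)    # G = P (id + Q)^{-1}
    return Pop, Qop, Gop

Pop, Qop, Gop = fredholm_solve()
# For small amplitude, G should match its Neumann series P(id - Q + Q^2 - ...).
G_series = Pop @ (np.eye(N) - Qop + Qop @ Qop)
err = np.abs(Gop - G_series).max()
```

For this small amplitude the truncated Neumann series agrees with the direct Fredholm solve to many digits, which checks the discretisation is consistent.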
Hence to recap, the three key ingredients in the first step in Pöppe's approach are the: (i) Operator differential equation for P = P(x, t); (ii) Assignment for the auxiliary data operator Q = Q(x, t); and (iii) Linear Fredholm equation for G = G(x, t).

The second step in Pöppe's approach, and the major underlying insight, is the 'kernel product rule', in which the Hankel property of P plays a crucial role. Suppose F = F(x, t) is a Hilbert–Schmidt linear operator with kernel f(y, z; x, t). Note we assume f depends on the parameters x and t. For example recall from above P = P(x, t) is a Hankel operator with kernel p = p(y + z + x; t). Let us denote by [F] the kernel of F, i.e. [F] = f. Now suppose F = F(x, t) and F′ = F′(x, t) are Hilbert–Schmidt operators with kernels continuous in x. In addition suppose H and H′ are Hilbert–Schmidt Hankel operators with kernels continuously differentiable in x. Then the fundamental theorem of calculus implies
[F ∂_x(HH′) F′](y, z; x, t) = [FH](y, 0; x, t)[H′F′](0, z; x, t).
This is the crucial 'kernel product rule' composing the second step in Pöppe's approach. More precisely, Pöppe used the 'trace' form of this rule evaluated with y = z = 0. We prefer to delay this specialisation until the final step in the procedure. The kernel product rule is the only property we use in Doikou et al. [16, 17] and herein.

The third and final step is to compute ∂_t G − d(∂)G where, from the linear Fredholm equation above, G = PU with U := (id + Q)^{−1}. We then apply the kernel bracket operator [·]. The basic calculus property ∂U = −U(∂Q)U initiates the generation of nonlinear terms. The goal is then to use only the kernel product rule to establish a 'closed form' for the nonlinear terms generated.
By a 'closed form' we mean the terms generated represent a constant coefficient non-commutative polynomial in [G], ∂[G], ∂²[G] and so forth. Hence for example, if d(∂) = µ₄∂⁴ and µ₄ is a pure imaginary constant parameter, then in our main Theorem 3.1 we show that if P = P(x, t) satisfies ∂_t P = d(∂)P and Q = P†P, then [G] satisfies the non-commutative nonlinear partial differential equation
∂_t[G] − µ₄∂⁴[G] = 2µ₄( (∂²[G])[G]†[G] + [G](∂²[G]†)[G] + 2[G][G]†(∂²[G])
  + (∂[G])(∂[G]†)[G] + 3(∂[G])[G]†(∂[G]) + [G](∂[G]†)(∂[G]) + 3[G][G]†[G][G]†[G] ).
In the above, P† is the operator adjoint to P while [G]† is the kernel function corresponding to the complex-conjugate transpose of the kernel function [G]. In the formulation above we have suppressed the parameter dependence of both the kernel [G] = [G](y, z; x, t) and the kernels of the nonlinear terms; recall the kernel product rule above. Hence for example two applications of the kernel product rule led to the first term on the right which in full should read
2µ₄(∂²[G])(y, 0; x, t)[G]†(0, 0; x, t)[G](0, z; x, t),
and so forth for the other cubic terms. Four applications of the kernel product rule generated the quintic term whose left and right factors should have parameter dependencies matching those of the corresponding factors in the cubic term above, and whose three central factors should have the parameter dependence (0, 0; x, t). By a standard convention we invoke, these dependencies are implied in the non-commutative equation above. We now emphasise that we can set y = z = 0 throughout so all the terms have the parameter dependence (0, 0; x, t).
This generates the non-commutative fourth order quintic nonlinear Schrödinger equation. Furthermore the solution to this equation is generated as follows. We solve the linear partial differential equation, namely ∂_t p = µ₄∂⁴p. This can be achieved analytically. The solution function p generates the kernel of the Hankel operator P = P(x, t). We set Q = P†P; this involves computing an integral whose integrand is a known function. We can then compute the solution to the non-commutative fourth order quintic NLS equation for [G] = [G](y, z; x, t) shown above by solving the linear Fredholm equation P = G(id + Q) for G. Hence the quintic NLS above is linearisable and thus integrable in this sense.

In Doikou et al. [17] with d(∂) = µ₂∂² and µ₂ a constant pure imaginary parameter, we generated the solution to the nonlinear Schrödinger equation in this way. With d(∂) = µ₃∂³ and µ₃ a real parameter, and a slight modification of the procedure above, we generated solutions to the Korteweg de Vries equation. Then in Doikou et al. [16] we generalised this approach to the non-commutative setting and also generated solutions to the non-commutative modified Korteweg de Vries equation in this way from d(∂) = µ₃∂³. Note, with the solution procedure described above, the specific choices of d = d(∂) indicated generate precise non-commutative polynomial functions Ψ representing the nonlinear terms. In preceding work, Beck et al. [9, 10] assume the kernels p = p(y, z; t) and q = q(y, z; t) associated with the operators P and Q satisfy a coupled pair of linear partial differential equations. They show the kernel [G] = [G](y, z; t) associated with the operator G solving the linear Fredholm equation P = G(id + Q) satisfies a Riccati partial differential equation which can be interpreted as a nonlocal nonlinear partial differential equation. For example Beck et al.
[10] generate solutions to the following nonlocal Korteweg de Vries equation for [G] = [G](y, z; t) using this approach:
∂_t[G](y, z; t) − ∂_y³[G](y, z; t) = ∫_ℝ [G](y, ξ; t)(∂_ξ[G](ξ, z; t)) dξ.
The nonlocal nonlinearity is the realisation at the kernel level of a linear operator product, and the kernel product rule is not used. All the nonlinear flows in Beck et al. [9, 10] and Doikou et al. [16, 17] are shown to be Grassmannian flows. Indeed the theory in Doikou et al. [16, Section 2.3] establishes the flow generated in our main Theorem 3.1 is also a Grassmannian flow.

In actuality, in Doikou et al. [16] and herein we consider an inflated coupled linear system, one which includes the linear partial differential equation ∂_t P = d(∂)P for P = P(x, t), but more generally assigns Q := P̃P where P̃ = P̃(x, t) is a linear operator analogous to P satisfying an associated linear partial differential equation ∂_t P̃ = d̃(∂)P̃. Here d̃ = d̃(∂) is a constant coefficient polynomial in ∂ analogous to d(∂), of the same degree. We correspondingly assign Q̃ := PP̃. And finally in addition to P = G(id + Q) we now include the analogous linear Fredholm equation P̃ = G̃(id + Q̃). See Definition 2.7 for the full inflated linear system. The inflated system also naturally generates a Grassmannian flow; see Doikou et al. [16]. Naturally we must assume the complex matrix-valued kernels of P and P̃ are commensurate so that Q and Q̃ make sense. Consequently with suitable restrictions on d̃(∂) we can choose P̃ = P† as above. Or for example, we can also consistently choose P̃(x, t) = Pᵀ(−x, −t) where Pᵀ is the linear operator whose kernel is the transpose of the kernel for P. In this case G̃(x, t) = Gᵀ(−x, −t) and we generate a non-commutative quintic equation like that above with [G]† replaced by [G]ᵀ(−x, −t) everywhere.
Thus the corresponding reverse space-time nonlocal non-commutative quintic equation, in the sense of Ablowitz and Musslimani [2], is linearisable.

We also develop a numerical method for solving such integrable systems, first explored in Doikou et al. [17] for the standard NLS and KdV equations. The numerical method applies to any integrable systems such as the non-commutative NLS and mKdV equations considered in Doikou et al. [16], as well as the non-commutative fourth order quintic equation above, or indeed, the more general non-commutative fourth order quintic equation which we establish is linearisable and integrable as our main result in Theorem 3.1. We can analytically solve the linearised partial differential system for the Hankel operator P = P(x, t), or indeed its corresponding kernel p = p(x, t), in Fourier space. This is because p satisfies the linear partial differential equation ∂_t p = d(∂)p, where d(∂) is a constant coefficient polynomial in ∂. Hence in principle we can evaluate p = p(x, t) at any given time t > 0, or in practice we can represent it to any degree of accuracy determined by the finite number of Fourier modes we choose to represent the Fourier series of p = p(x, t), on a truncated domain. We can then determine the kernel function q = q(y, z; x, t) corresponding to the associated data operator Q = Q(x, t) by evaluating Q = P†P, i.e. by computing
q(y, z; x, t) = ∫_{−∞}^0 p†(y + ζ + x; t) p(ζ + z + x; t) dζ.
Note P† denotes the adjoint operator to P, while for the kernel function, p† denotes the complex conjugate transpose of the complex matrix-valued function p. Using our representation for p we can approximate the integral on the right-hand side to compute an approximation for q. To compute the operator G we need to solve the Fredholm equation P = G(id + Q), i.e. if g = g(y, z; x, t) is the kernel corresponding to G, we numerically solve the Fredholm integral equation
p(y + z + x; t) = g(y, z; x, t) + ∫_{−∞}^0 g(y, ζ; x, t) q(ζ, z; x, t) dζ.
We achieve this by approximating the integral on the right-hand side by a Riemann sum and solving the resulting large linear algebraic system of equations. We then set y = z = 0 to determine g(0, 0; x, t), the kernel corresponding to [G](0, 0; x, t). The method works for any given initial data function p₀ = p₀(x) for which p(x, 0) = p₀(x). In principle, given any initial data function g₀ = g₀(x) for which g(0, 0; x, 0) = g₀(x), we could compute p(x, 0) via 'scattering' methods, however we do not implement this here. In general the overall numerical procedure we implement is straightforward, and in practice, appears to be robust.

In addition to the series of papers by Pöppe mentioned above, the work herein was also motivated by Ablowitz et al. [4], Dyson [18] and McKean [41]. We also mention in this context Ercolani and McKean [19] and Mumford [43, p. 3.239]. The Marchenko equation is central not only to Pöppe's approach, but also to that of Fokas and Ablowitz [22], Nijhoff et al. [45] and the Zakharov–Shabat scheme [65, 66]. Details of Fokas' unified transform method can be found for example in Fokas and Pelloni [23]. Hankel operators have received a lot of recent attention, see Grudsky and Rybkin [27, 28], Grellier and Gerard [26] and Blower and Newsham [12]. Nonlocal integrable systems have also received a lot of recent attention, see Ablowitz and Musslimani [2], Fokas [21], Grahovski, Mohammed and Susanto [25] and Gürses and Pekcan [31–34]. A non-commutative fourth order quintic NLS equation corresponding to that above can be found in Nijhoff et al. [45, eq. B.4a] who establish integrability via the method pioneered by Fokas and Ablowitz [22]. Indeed Nijhoff et al. [45, eq. B.5a] also present the non-commutative fifth order quintic NLS equation. For further early work on non-commutative integrable systems, see in addition, Manakov [39], Fordy and Kulisch [24] and Ablowitz et al. [3]. For more recent work on the multi-component NLS, as well as its discretisation, see Ablowitz et al. [3], Degasperis and Lombardo [14] and Pelinovksy [48]. We remark we do not utilise a Lax pair Lυ = λυ and ∂_t υ = Dυ for the auxiliary function υ and spectral parameter λ, and require compatibility. Here we use the linearised evolution equation ∂_t υ = Dυ and require υ to have the Hankel property.

General higher order NLS equations have recently received a lot of attention; see Karpman [36], Karpman and Shagalov [37], Ben–Artzi et al.
[11], Fibich et al. [20], Pausader [47], Boulenger and Lenzmann [13], Kwak [38], Oh and Wang [46] and Posukhovskyi and Stefanov [53]. More specifically though, in our main result in Theorem 3.1 we establish integrability for a more general system of equations than that shown for [G] above. The more general system also includes standard nonlinear Schrödinger dispersion and nonlinear terms, both with the scalar constant pure imaginary factor µ₂, as well as the next order terms in the NLS hierarchy, namely standard non-commutative mKdV third order dispersion and nonlinear terms, both with the scalar constant real factor µ₃. The commutative form of this more general equation, involving the parameters µ₂, µ₃ and µ₄, has recently found applications as a model for short pulse propagation in optical fibres as follows; see Kang et al. [35] and Agrawal [5]. The fundamental nonlinear Schrödinger equation itself, corresponding to µ₃ = µ₄ = 0, describes the propagation of picosecond pulses in an optical fibre. The case when µ₄ = 0 (only) corresponds to the Hirota equation which describes the propagation of femtosecond soliton pulses in mono-mode optical fibres, see Demiray et al. [15], Mihalache et al. [42] and Nakkeeran [44]. The case with µ₂ and µ₄ pure imaginary, and µ₃ real, with all three non-zero, describes "ultrashort optical-pulse propagation in a long-distance, high-speed optical fibre transmission system", see again Kang et al. [35] or Guo et al. [30]. Also see Wang et al. [63] who consider the case µ₂ = 0 (only). Attosecond pulses in an optical fibre are described by a fifth order NLS equation. Indeed Kang et al. [35] consider an eighth order NLS equation as a model for ultrashort pulse propagation. The NLS hierarchy up to and including order eight can be found in Matveev and Smirnov [40] who consider multi-rogue wave solutions.
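The linear flow ∂_t p = (µ₂∂² + µ₃∂³ + µ₄∂⁴)p common to all the cases just discussed diagonalises in Fourier space, so it can be solved exactly there, as the numerical method described earlier exploits. Below is a minimal sketch on an assumed periodic truncation of the line; the parameter values and Gaussian initial data are illustrative, chosen only to respect the sign conventions above (µ₂, µ₄ pure imaginary, µ₃ real).

```python
import numpy as np

# Solve dp/dt = mu2 p_xx + mu3 p_xxx + mu4 p_xxxx exactly in Fourier space,
# on a periodic truncated domain (illustrative parameter values).
Lx, M = 40.0, 256
x = np.linspace(-Lx / 2, Lx / 2, M, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(M, d=Lx / M)

mu2, mu3, mu4 = 1j, 0.1, 0.5j              # imaginary, real, imaginary factors
d = mu2 * (1j * k)**2 + mu3 * (1j * k)**3 + mu4 * (1j * k)**4   # symbol d(ik)

p0 = np.exp(-x**2)                          # illustrative initial data p(x, 0)

def evolve(t):
    """p(x, t) = F^{-1}[ exp(d(ik) t) F[p(x, 0)] ]."""
    return np.fft.ifft(np.exp(d * t) * np.fft.fft(p0))

p1 = evolve(0.5)
```

With these sign conventions the symbol d(ik) is purely imaginary, so each Fourier mode only rotates in phase and the discrete L² norm of p is conserved exactly, which is a convenient consistency check.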
For applications to pulses in erbium-doped fibres modelled by the higher order NLS equations above coupled to a Maxwell–Bloch system, see Guan et al. [29], Guo et al. [30], Ren et al. [55], Wang et al. [61], Wang et al. [62] and Wang et al. [64].

To summarise, what is new in this paper is that we:
(i) Give a direct proof, based on the Hankel operator approach of Ch. Pöppe, that a generalised non-commutative fourth order quintic nonlinear Schrödinger equation is linearisable and thus integrable (Theorem 3.1);
(ii) Prove reverse space-time and reverse time nonlocal versions, in the sense of Ablowitz and Musslimani [2], of the non-commutative fourth order quintic nonlinear Schrödinger equation, are also linearisable and thus integrable. These results are specialisations of the approach utilised to establish (i);
(iii) Develop a numerical method to accurately evaluate the solution to such integrable systems at any given time, based on the analytical linearisation approach in (i). The numerical method involves the analytical solution of the linearised partial differential system at the time given, computing an associated data function and then numerically solving a linear Fredholm equation to generate the solution at that time. The method is straightforward, and in practice appears to be robust.

Our paper is organised as follows. In Section 2 we introduce the notation and key concepts and identities we need to prove our main result. The latter is presented and proved in Section 3. We demonstrate our numerical method based on the analytical approach above in Section 4. We include some insights on, and comments on future directions for, the work herein in the final Discussion Section 5.

2. Preliminaries
We consider Hilbert–Schmidt integral operators which depend on both a spatial parameter x ∈ ℝ and a time parameter t ∈ [0, ∞). Throughout, ∂_t represents the partial derivative with respect to the time parameter t while ∂ = ∂_x represents the partial derivative with respect to the spatial parameter x. Hilbert–Schmidt operators are representable in terms of square-integrable kernels. Hence for a given
Hilbert–Schmidt operator F = F(x, t), there exists a square-integrable kernel f = f(y, z; x, t) such that for any square-integrable function φ,
(Fφ)(y; x, t) = ∫_{−∞}^0 f(y, z; x, t) φ(z) dz.

Definition 2.1 (Kernel bracket). With reference to the operator F just above, we use the kernel bracket notation [F] to denote the kernel of F:
[F](y, z; x, t) := f(y, z; x, t).
We often drop the dependencies and simply write [·].

Of critical importance throughout this paper is a class of integral operators known as Hankel operators. We consider Hankel operators which depend on a parameter x as follows.

Definition 2.2 (Hankel operator with parameter). We say a given time-dependent Hilbert–Schmidt operator H with corresponding square-integrable kernel h is Hankel or additive with parameter x ∈ ℝ if its action, for any square-integrable function φ, is given by
(Hφ)(y; x, t) := ∫_{−∞}^0 h(y + z + x; t) φ(z) dz.

Hankel operators of this form are the starting point for Pöppe's approach; see Pöppe [49, 50] and Doikou et al. [16, 17]. As mentioned in the introduction there is a crucial kernel product rule we rely on throughout. This is as follows. We include the proof from Doikou et al. [16, 17] for completeness.
Lemma 2.3 (Kernel product rule). Assume H, H′ are Hankel Hilbert–Schmidt operators with parameter x and F, F′ are Hilbert–Schmidt operators. Assume further that the corresponding kernels of F and F′ are continuous and those of H and H′ are continuously differentiable. Then the following kernel product rule holds:
[F ∂(HH′) F′](y, z; x) = [FH](y, 0; x)[H′F′](0, z; x).

Proof. We use the fundamental theorem of calculus and the Hankel properties of H and H′. Let f, h, h′ and f′ denote the integral kernels of F, H, H′ and F′ respectively. By direct computation [F ∂_x(HH′) F′](y, z; x) equals
∫_{ℝ³₋} f(y, ξ₁; x) ∂_x( h(ξ₁ + ξ₂ + x) h′(ξ₂ + ξ₃ + x) ) f′(ξ₃, z; x) dξ₂ dξ₃ dξ₁
= ∫_{ℝ³₋} f(y, ξ₁; x) ∂_{ξ₂}( h(ξ₁ + ξ₂ + x) h′(ξ₂ + ξ₃ + x) ) f′(ξ₃, z; x) dξ₂ dξ₃ dξ₁
= ∫_{ℝ²₋} f(y, ξ₁; x) h(ξ₁ + x) h′(ξ₃ + x) f′(ξ₃, z; x) dξ₁ dξ₃
= ∫_{ℝ₋} f(y, ξ₁; x) h(ξ₁ + x) dξ₁ · ∫_{ℝ₋} h′(ξ₃ + x) f′(ξ₃, z; x) dξ₃
= ([FH](y, 0; x))([H′F′](0, z; x)),
giving the result. □
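As a sanity check (not part of the paper), the kernel product rule can be verified numerically on a truncated grid for ℝ₋ = (−∞, 0]. The exponential kernels f, h, h′, f′ below are illustrative assumptions chosen so all the truncated integrals converge and can also be evaluated in closed form.

```python
import numpy as np

# Truncated grid for R_ = (-infty, 0] and trapezoidal quadrature weights.
L, N = 30.0, 1500
xi = np.linspace(-L, 0.0, N)
w = np.full(N, xi[1] - xi[0]); w[0] *= 0.5; w[-1] *= 0.5

# Illustrative kernels (assumptions): all decay as their argument -> -infty.
f  = lambda y, s: np.exp(y + s)       # kernel of F
h  = lambda s: np.exp(s)              # Hankel kernel of H
hp = lambda s: np.exp(2.0 * s)        # Hankel kernel of H'
fp = lambda s, z: np.exp(s + z)       # kernel of F'

def lhs(y, z, x, eps=1e-4):
    """[F d/dx (H H') F'](y, z; x) via central finite difference in x."""
    def triple(xx):
        H1 = h(xi[:, None] + xi[None, :] + xx)    # h(xi1 + xi2 + x)
        H2 = hp(xi[:, None] + xi[None, :] + xx)   # h'(xi2 + xi3 + x)
        u = w * f(y, xi)                          # quadrature in xi1
        v = w * fp(xi, z)                         # quadrature in xi3
        return u @ H1 @ np.diag(w) @ H2 @ v
    return (triple(x + eps) - triple(x - eps)) / (2.0 * eps)

def rhs(y, z, x):
    """[FH](y, 0; x) [H'F'](0, z; x)."""
    FH = np.sum(w * f(y, xi) * h(xi + x))
    HF = np.sum(w * hp(xi + x) * fp(xi, z))
    return FH * HF

val_l, val_r = lhs(0.0, 0.0, 0.0), rhs(0.0, 0.0, 0.0)
```

For these particular exponentials, at y = z = x = 0 both sides equal (∫e^{2ξ}dξ)(∫e^{3ξ}dξ) = (1/2)(1/3) = 1/6.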
This kernel bracket operator and the product rule above originate from the work of Pöppe in [49–51] and Bauhardt and Pöppe [7]. We record in the following lemma some identities for W := (id + F)^{−1} which are useful later on. We assume F depends on a parameter. Similar results are derived by Pöppe [49, 50].

Lemma 2.4 (Inverse operator identities). Suppose the operator F depends on a parameter with respect to which we wish to compute derivatives. Further suppose W := (id + F)^{−1} exists. Then the following identities hold:
(i) id − W = WF = FW;
(ii) ∂W = −W(∂F)W;
(iii) ∂W = −(∂W)F − W(∂F) = −(∂F)W − F(∂W);
(iv) ∂²W = −2(∂W)(∂F)W − W(∂²F)W;
(v) ∂³W = −3(∂²W)(∂F)W − 3(∂W)(∂²F)W − W(∂³F)W;
(vi) ∂⁴W = −4(∂³W)(∂F)W − 6(∂²W)(∂²F)W − 4(∂W)(∂³F)W − W(∂⁴F)W.

Proof. The first identity is straightforward. The others follow by successively differentiating id − W = WF and finally using W := (id + F)^{−1} in each form. □

We have the following corollary to Lemma 2.4, which can also be found in Doikou et al. [16].
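The identities of Lemma 2.4 are purely algebraic, so they can be sanity-checked in finite dimensions (a sketch not from the paper): take a matrix family F(x) depending polynomially on the parameter, set W(x) = (I + F(x))⁻¹, and compare the identities against finite differences. The matrix family below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = (0.1 * rng.standard_normal((n, n)) for _ in range(3))
I = np.eye(n)

# Illustrative matrix family F(x) = A + x B + x^2 C, so at x = 0:
# dF = B and d2F = 2C.
F = lambda x: A + x * B + x**2 * C
W = lambda x: np.linalg.inv(I + F(x))

W0 = W(0.0)
dW  = -W0 @ B @ W0                                # identity (ii)
d2W = -2.0 * dW @ B @ W0 - W0 @ (2.0 * C) @ W0    # identity (iv)

# Finite-difference checks of (ii) and (iv); (i) holds exactly in x.
eps = 1e-4
dW_fd  = (W(eps) - W(-eps)) / (2.0 * eps)
d2W_fd = (W(eps) - 2.0 * W0 + W(-eps)) / eps**2

err_i  = np.abs(I - W0 - W0 @ A).max()            # id - W = W F at x = 0
err_ii = np.abs(dW - dW_fd).max()
err_iv = np.abs(d2W - d2W_fd).max()
```

Identities (v) and (vi) can be checked the same way with higher order finite-difference stencils, at the cost of more rounding noise.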
Corollary 2.5. Suppose we set F := Q with Q = P̃P and U := (id + Q)^{−1}, so U = (id + P̃P)^{−1}. Assume U exists. Then U satisfies properties (i)–(vi) in Lemma 2.4. Further suppose we set F := Q̃ with Q̃ = PP̃ and V := (id + Q̃)^{−1}. Assume V exists. Then similarly V satisfies properties (i)–(vi) in Lemma 2.4. We note PU^{−1} = V^{−1}P, so we have VP = PU. Similarly, we have UP̃ = P̃V.

The following key identities also prove useful throughout the proof of our main result in Section 3. To keep our statements succinct hereafter we use the following notation convention. For the kernel product rule introduced in Lemma 2.3, for the terms on the right we simply write [FH][H′F′], where it is understood the left factor is evaluated at (y, 0; x, t) and the right factor is evaluated at (0, z; x, t). When there are three factors in the product, such as for the case [F₁∂(H₁H₁′)F₁′F₂∂(H₂H₂′)F₂′] where H₁, H₁′, H₂ and H₂′ are Hankel operators, we write
[F₁H₁][H₁′F₁′F₂H₂][H₂′F₂′],
where it is understood the left and right factors are evaluated at (y, 0; x, t) and (0, z; x, t) respectively, while the middle factor is evaluated at (0, 0; x, t). This is just a direct consequence of successively applying the kernel product rule. For higher degree products of the form just above, again the left and right factors are always evaluated at (y, 0; x, t) and (0, z; x, t) respectively, while all the middle factors are evaluated at (0, 0; x, t).

Lemma 2.6 (Key identities). Assume the Hilbert–Schmidt operators P, P̃, G, G̃ and trace class operators Q := P̃P, Q̃ := PP̃ all depend on a parameter x and are related by P = G(id + Q) and P̃ = G̃(id + Q̃). Assume P and P̃ are Hankel operators as in Definition 2.2 and also that the inverse operators U := (id + Q)^{−1} and V := (id + Q̃)^{−1} exist. Then we have the following identities:
(i) ∂[PUP̃] = [G][G̃];
(ii) ∂[PU(∂P̃)] = [G]∂[G̃] + [G][G̃][PUP̃];
(iii) ∂[PU(∂²P̃)] = [G]∂²[G̃] + 2([G][G̃])² + [G](∂[G̃])[PUP̃] − [G][G̃][∂(PU)P̃];
and by partial differentiation that,
(iv) ∂²[PUP̃] = ∂([G][G̃]);
(v) ∂²[PU(∂P̃)] = ∂([G]∂[G̃]) + ([G][G̃])² + ∂([G][G̃])[PUP̃].

Proof.
First, by differentiating the formula id − V = PUP̃ partially with respect to x we see
[∂(PUP̃)] = −[∂V] = [V∂(PP̃)V] = [VP][P̃V] = [G][G̃],
giving (i). Second, for (ii), using the product rule and id − V = PUP̃ we find:
∂(PU(∂P̃)) = ∂(PU)(∂P̃) + PU(∂²P̃)
= ∂(PU)(∂P̃)V + PU(∂²P̃)V + ∂(PU)(∂P̃)PUP̃ + PU(∂²P̃)PUP̃
= ∂(VP)(∂P̃)V + VP(∂²P̃)V + ∂(PU)∂(P̃P)UP̃ + PU∂((∂P̃)P)UP̃ − ∂(PU)P̃(∂P)UP̃ − PU(∂P̃)(∂P)UP̃.
The last two terms on the right combine to become (∂V)(∂P)P̃V since we know ∂(PUP̃) = −∂V and UP̃ = P̃V. Substituting this result into the expression above and combining (∂V)(∂P)P̃V with the first two terms on the right and then applying the kernel bracket operator we observe:
[∂(PU(∂P̃))] = [V∂(P(∂P̃))V] + [(∂V)∂(PP̃)V] + [∂(PU)P̃][PUP̃] + [PU(∂P̃)][PUP̃]
= [VP][(∂P̃)V] + [(∂V)P][P̃V] + [∂(PUP̃)][PUP̃].
Note we have
[(∂P̃)V] = [∂(P̃V)] − [P̃(∂V)] = ∂[P̃V] + [P̃V∂(PP̃)V] = ∂[P̃V] + [P̃VP][P̃V],
and [(∂V)P] = −[V∂(PP̃)VP] = −[VP][P̃VP]. Substituting these into the right-hand side, cancelling like terms and using (i) gives (ii).

Third, we prove (iii) using a similar strategy to that we used for (ii). Using the product rule and id − V = PUP̃ we find:
∂(PU(∂²P̃)) = ∂(PU)(∂²P̃) + PU(∂³P̃)
= ∂(PU)(∂²P̃)V + PU(∂³P̃)V + ∂(PU)(∂²P̃)PUP̃ + PU(∂³P̃)PUP̃
= ∂(VP)(∂²P̃)V + VP(∂³P̃)V + ∂(PU)∂((∂P̃)P)UP̃ + PU∂((∂²P̃)P)UP̃ − ∂(PU)(∂P̃)(∂P)UP̃ − PU(∂²P̃)(∂P)UP̃
= (∂V)P(∂²P̃)V + V∂(P(∂²P̃))V + ∂(PU)∂((∂P̃)P)UP̃ + PU∂((∂²P̃)P)UP̃ − ∂(PU(∂P̃))(∂P)UP̃.
If we now apply the kernel bracket operator to the expression above, combine the third and fourth terms on the right, use that ∂V = −V∂(PP̃)V and use the kernel product rule, we obtain:
∂[PU(∂²P̃)] = −[G][P̃VP(∂²P̃)V] + [G][(∂²P̃)V] + (∂[PU(∂P̃)])[PUP̃] − [∂(PU(∂P̃))(∂P)UP̃].
Note for the second factor of the second term on the right just above we have:
[(∂²P̃)V] = ∂²[G̃] − 2[(∂P̃)(∂V)] − [P̃(∂²V)]
(a) = ∂²[G̃] + 2[(∂P̃)V∂(PP̃)V] + 2[P̃(∂V)∂(PP̃)V] + [P̃V∂²(PP̃)V]
(b) = ∂²[G̃] + 2[(∂P̃)VP][G̃] + 2[P̃(∂V)P][G̃] + 2[P̃V∂((∂P)P̃)V] − [P̃V(∂²P)P̃V] + [P̃VP(∂²P̃)V]
(c) = ∂²[G̃] + 2∂[P̃VP][G̃] − [P̃V(∂²P)P̃V] + [P̃VP(∂²P̃)V]
(d) = ∂²[G̃] + 2[G̃][G][G̃] − [P̃V(∂²P)P̃V] + [P̃VP(∂²P̃)V],
where we used the: (a) Identities (ii) and (iv) from the inverse operator Lemma 2.4 with W = V; (b) Kernel bracket product rule, added and subtracted the term [P̃V(∂²P)P̃V] and separated the final term from the previous line as shown; (c) Kernel bracket product rule applied to the fourth term on the right from the previous line, and combined the resulting term with the other terms with the factor 2 shown; and (d) Identity [∂(P̃VP)] = −[∂U] = [U∂(P̃P)U] = [G̃][G].

We now insert this expression for [(∂²P̃)V] into the second term on the right in the expression for ∂[PU(∂²P̃)] just above; cancelling like terms, this gives
∂[PU(∂²P̃)] = [G]∂²[G̃] + 2([G][G̃])² − [G][P̃V(∂²P)P̃V] + (∂[PU(∂P̃)])[PUP̃] − [∂(PU(∂P̃))(∂P)UP̃]
(e) = [G]∂²[G̃] + 2([G][G̃])² + [G](∂[G̃])[PUP̃] + [G][G̃][PUP̃]² − [G][P̃V(∂²P)P̃V] − [∂(PU(∂P̃))(∂P)UP̃]
(f) = [G]∂²[G̃] + 2([G][G̃])² + [G](∂[G̃])[PUP̃] − [G][G̃][P(∂U)P̃] − [G][P̃V(∂²P)P̃V] − [∂(PU(∂P̃))(∂P)UP̃],
where in (e) we used result (ii) and in (f) we used [PUP̃]² = [PU∂(P̃P)UP̃] = −[P(∂U)P̃]. Consider the two terms on the final line on the right-hand side just above. Reversing the kernel product rule on the first term and using V∂(PP̃)V = −∂V = ∂(PUP̃), these two terms become
−[∂(PUP̃)(∂²P)P̃V] − [∂(PU(∂P̃))(∂P)UP̃]
= −[∂(PU)P̃(∂²P)UP̃] − [PU(∂P̃)(∂²P)UP̃] − [∂(PU)(∂P̃)(∂P)UP̃] − [PU(∂²P̃)(∂P)UP̃]
= −[∂(PU)∂(P̃(∂P))UP̃] − [PU∂((∂P̃)(∂P))UP̃]
= −[∂(PU)P̃][(∂P)UP̃] − [PU(∂P̃)][(∂P)UP̃]
= −[G][G̃][(∂P)UP̃].
Substituting this back into the last expression for ∂[PU(∂²P̃)] just above and combining the terms [(∂P)UP̃] + [P(∂U)P̃] = [∂(PU)P̃] gives result (iii).

Fourth, results (iv) and (v) follow directly by respectively differentiating (i) and (ii) partially with respect to x, and using result (i) itself to help establish (v). □

We now outline the coupled linear operator system that underlies the non-commutative fourth order quintic NLS equation we consider herein. Solutions of this coupled linear system generate solutions to the target nonlinear local and nonlocal partial differential equations.
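Identity (i) of Lemma 2.6, at y = z = 0, can also be observed numerically (a sketch not from the paper): discretise P and P̃ as Hankel integral operators on a truncated grid for (−∞, 0], form U and V, and compare a finite difference in x of the kernel of PUP̃ with the product of the kernels of G = PU and G̃ = P̃V. The exponential kernels and amplitude are illustrative assumptions; agreement is only up to quadrature and finite-difference error.

```python
import numpy as np

# Truncated grid for (-infty, 0] and trapezoidal weights.
Ltr, N = 30.0, 1200
xi = np.linspace(-Ltr, 0.0, N)
w = np.full(N, xi[1] - xi[0]); w[0] *= 0.5; w[-1] *= 0.5
D, I = np.diag(w), np.eye(N)

c = 0.1
p  = lambda s: c * np.exp(s)        # illustrative Hankel kernel of P
pt = lambda s: c * np.exp(2.0 * s)  # illustrative Hankel kernel of P~

def kernels(x):
    Pk  = p(xi[:, None] + xi[None, :] + x)
    Ptk = pt(xi[:, None] + xi[None, :] + x)
    U = np.linalg.inv(I + (Ptk @ D) @ (Pk @ D))   # U = (id + P~P)^{-1}
    V = np.linalg.inv(I + (Pk @ D) @ (Ptk @ D))   # V = (id + PP~)^{-1}
    PUPt = Pk @ D @ U @ Ptk                       # kernel of P U P~
    Gk  = (Pk @ D @ U) / w[None, :]               # kernel of G  = P U
    Gtk = (Ptk @ D @ V) / w[None, :]              # kernel of G~ = P~ V
    return PUPt[-1, -1], Gk[-1, -1], Gtk[-1, -1]  # values at y = z = 0

eps = 1e-4
kp, _, _ = kernels(eps)
km, _, _ = kernels(-eps)
_, G00, Gt00 = kernels(0.0)
lhs_val = (kp - km) / (2.0 * eps)   # d/dx [P U P~](0, 0; x) at x = 0
rhs_val = G00 * Gt00                # [G](0, 0; x) [G~](0, 0; x)
```

For this small amplitude the leading-order value of both sides is p(0)·p̃(0) = c², with small corrections from U and V.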
Definition 2.7 (Linear operator system). Suppose the Hilbert–Schmidt linear operators P, P̃, Q, Q̃, G and G̃ satisfy the coupled linear system of equations:

∂ₜP = µ₂∂²P + µ₃∂³P + µ₄∂⁴P,  Q = P̃P,  P = G(id + Q),

and

∂ₜP̃ = µ̃₂∂²P̃ + µ̃₃∂³P̃ + µ̃₄∂⁴P̃,  Q̃ = PP̃,  P̃ = G̃(id + Q̃),

where the constant parameters µ₂, µ₃, µ₄, µ̃₂, µ̃₃, µ̃₄ ∈ C. Naturally we suppose the matrix kernels of P and P̃ are commensurate so Q and Q̃ are well-defined.

We say the system of equations prescribing G and G̃ in this definition is linear. This is because, to determine G and G̃, we must first solve the linear partial differential equations prescribing the flows of P and P̃. Second, we determine Q and Q̃ directly from P and P̃ without having to solve another equation. And finally, third, we determine G and G̃ by solving the linear Fredholm equations shown. The parameters µ̃_j are given by µ̃_j = (−1)^(j−1) µ_j, for j = 2, 3, 4. Suppose for some finite time interval we know both P and P̃ are Hilbert–Schmidt operators whose kernels also lie in Dom(∂⁴), and they are smooth in time. Here Dom(∂⁴) denotes the subset of kernels which are four times continuously differentiable with respect to x, with all derivatives square-integrable. Note, by the ideal property for Hilbert–Schmidt operators, the operators Q and Q̃ are trace-class and smooth in time on the finite time interval. Assume the Fredholm determinants det(id + Q(0)) and det(id + Q̃(0)) are non-zero; see the end of this section for more details on Fredholm determinants. Then there exists a possibly shorter finite time interval on which the Fredholm determinants associated with Q = Q(t) and Q̃ = Q̃(t) are non-zero. Further, on that time interval, there exist unique solutions G and G̃ to the linear Fredholm equations P = G(id + Q) and P̃ = G̃(id + Q̃), respectively, whose kernels lie in Dom(∂⁴) and are smooth in time. These conclusions are established in Doikou et al. [17].

Now suppose the complex matrix-valued functions p = p(x,t) and p̃ = p̃(x,t) satisfy the respective linear equations

∂ₜp = d(∂)p and ∂ₜp̃ = d̃(∂)p̃,

where d(∂) := µ₂∂² + µ₃∂³ + µ₄∂⁴ and d̃(∂) := µ̃₂∂² + µ̃₃∂³ + µ̃₄∂⁴, with p(x,0) = p₀(x) and p̃(x,0) = p̃₀(x) for all x ∈ R, for given complex matrix-valued functions p₀ and p̃₀. For w: R → R₊, let L²_w denote the space of complex matrix-valued functions f on R whose L²-norm weighted by w is finite, i.e.

‖f‖²_{L²_w} := ∫_R tr(f†(x)f(x)) w(x) dx < ∞,

where f† denotes the complex-conjugate transpose of f and 'tr' is the trace operator. Let W: R → R₊ denote the function W: x ↦ x². Further, let H^∞ denote the Sobolev space of complex matrix-valued functions which themselves, as well as their derivatives ∂ᵏ to all orders k, are square-integrable.

Definition 2.8 (Dispersion property). We say the constant coefficient polynomial operator d = d(∂) satisfies the dispersion property if for all κ ∈ R:

(d(iκ))* = −d(iκ).

Suppose d(∂) = µ₂∂² + µ₃∂³ + µ₄∂⁴ satisfies the dispersion property. This places a restriction on the parameters µ₂, µ₃, µ₄ ∈ C, in particular that µ₂ and µ₄ are pure imaginary parameters and µ₃ is a real parameter. Then Doikou et al. [16, Lemma 3.1] establish that if p₀ ∈ H^∞ ∩ L²_W then p ∈ C^∞([0,∞); H^∞ ∩ L²_W) and the corresponding Hankel operator P = P(t) is Hilbert–Schmidt valued. Analogous results carry over to the operator P̃ under the assumption that d̃ = d̃(∂) satisfies the dispersion property, with the corresponding restrictions imposed on µ̃₂, µ̃₃, µ̃₄ ∈ C. We are now in a position to establish the well-posedness of solutions to the linear operator system in Definition 2.7, under the dispersion property assumption for d = d(∂) and d̃ = d̃(∂), by combining the conclusions of the previous two paragraphs. More details on the following arguments can be found in Doikou et al. [17] and Doikou et al. [16]. Assume d = d(∂) and d̃ = d̃(∂) satisfy the dispersion property and p₀ ∈ H^∞ ∩ L²_W and p̃₀ ∈ H^∞ ∩ L²_W. Hence the Hankel operators P and P̃ of the form given in Definition 2.2, respectively generated from p and p̃, are Hilbert–Schmidt operators.
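The parameter restriction can be read off from the symbol: d(iκ) = −µ₂κ² − iµ₃κ³ + µ₄κ⁴ is purely imaginary valued for all real κ exactly when µ₂, µ₄ ∈ iR and µ₃ ∈ R. A small numerical check of Definition 2.8 (the function names here are our own):

```python
import numpy as np

def dispersion_symbol(kappa, mu2, mu3, mu4):
    """d(i*kappa) for d(x) = mu2 x^2 + mu3 x^3 + mu4 x^4."""
    ik = 1j * kappa
    return mu2 * ik**2 + mu3 * ik**3 + mu4 * ik**4

def has_dispersion_property(mu2, mu3, mu4, kappas=np.linspace(-5.0, 5.0, 101)):
    """Check (d(i kappa))* = -d(i kappa) on a grid of real frequencies kappa."""
    d = dispersion_symbol(kappas, mu2, mu3, mu4)
    return np.allclose(np.conj(d), -d)

# mu2, mu4 pure imaginary and mu3 real: the property holds ...
assert has_dispersion_property(-1j, 1.0, 1j)
# ... and fails if, say, mu3 acquires an imaginary part.
assert not has_dispersion_property(-1j, 1.0 + 0.5j, 1j)
```

The first assertion uses the parameter values chosen for the simulations in Section 4.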
Naturally we assume the matrices p and p̃ are commensurate so the matrix products p̃p and pp̃ make sense. We further assume the trace-class operators Q := P̃P and Q̃ := PP̃ are such that det(id + Q₀) ≠ 0 and det(id + Q̃₀) ≠ 0. Then we deduce the commensurate solutions p = p(y + x, t) and p̃ = p̃(y + x, t) to the respective linear partial differential equations ∂ₜp = d(∂)p and ∂ₜp̃ = d̃(∂)p̃ are such that p ∈ C^∞([0,∞); H^∞ ∩ L²_W) and p̃ ∈ C^∞([0,∞); H^∞ ∩ L²_W), with p(x,0) = p₀(x) and p̃(x,0) = p̃₀(x) for all x ∈ R. The corresponding respective Hankel operators P = P(x,t) and P̃ = P̃(x,t) are Hilbert–Schmidt operators and smooth functions of x ∈ R and t ∈ [0,∞). The kernel function q = q(y,z;x,t) corresponding to Q = Q(x,t), given by

q(y,z;x,t) = ∫_{R₋} p̃(y + ξ + x, t) p(ξ + z + x, t) dξ,

generates a trace-class operator and is a smooth function of x and t, where R₋ := (−∞, 0]. The kernel function q̃ = q̃(y,z;x,t) corresponding to Q̃ = Q̃(x,t) is defined similarly, with the positions of p and p̃ exchanged, and possesses the same properties. Further, there exists a T > 0 such that for t ∈ [0,T] we know det(id + Q(x,t)) ≠ 0 and det(id + Q̃(x,t)) ≠ 0, and there exists a unique smooth function g satisfying the linear Fredholm equation given by

p(y + z + x; t) = g(y,z;x,t) + ∫_{R₋} g(y,ξ;x,t) q(ξ,z;x,t) dξ,

as well as a unique smooth g̃ satisfying an analogous linear Fredholm equation but with p, q and g respectively replaced by p̃, q̃ and g̃.

Finally, we briefly outline why the solution flow for G prescribed in Definition 2.7, or in fact for the full inflated system for G and G̃ shown therein, is a Fredholm Grassmannian flow. Details on Fredholm Grassmann manifolds can be found in Pressley and Segal [54], or more recently in Abbondandolo and Majer [1] or Andruchow and Larotonda [6]. Their connection to integrable systems can be found in Sato [56,57] and Segal and Wilson [58]. For more explicit details on the connection between Fredholm Grassmannian flows and the flow prescribed by the linear system of equations in Definition 2.7, see Beck et al. [9,10], Doikou et al. [16, Sec. 2.3] and Doikou et al. [17]. Suppose H is a given separable Hilbert space. The Fredholm Grassmannian of all subspaces of H that are comparable in size to a given closed subspace V ⊂ H is defined as follows; see for example Segal and Wilson [58].

Definition 2.9 (Fredholm Grassmannian). Let H be a separable Hilbert space with a given decomposition H = V ⊕ V⊥, where V and V⊥ are infinite dimensional closed subspaces. The Grassmannian Gr(H, V) is the set of all subspaces W of H such that:
(i) the orthogonal projection pr: W → V is a Fredholm operator, indeed it is a Hilbert–Schmidt perturbation of the identity; and
(ii) the orthogonal projection pr: W → V⊥ is a Hilbert–Schmidt operator.

Since H is separable, any element in H has a representation on a countable basis, for example via the sequence of coefficients of the basis elements. Suppose we are given a set of independent sequences in H = V ⊕ V⊥ and we record them as columns in the infinite matrix W = (id + Q; P), whose upper block is id + Q and whose lower block is P. In other words, each column of id + Q lies in V and each column of P lies in V⊥. Assume also that when we constructed id + Q we ensured it was a Fredholm operator on V with Q ∈ J₂(V; V), where J₂(V; V) is the class of Hilbert–Schmidt operators from V to V, equipped with the norm ‖Q‖²_{J₂} := tr Q†Q, where 'tr' is the trace operator. The space J₁(V; V) denotes the space of trace-class operators, and for any Q ∈ Jₙ, n = 1, 2, we can define

detₙ(id + Q) := exp( Σ_{k ≥ n} ((−1)^(k−1)/k) tr(Qᵏ) ).

This is the Fredholm determinant when n = 1, and the regularised Fredholm determinant when n = 2; see Simon [59]. Naturally, the operator id + Q is invertible if and only if detₙ(id + Q) ≠ 0. We also assume we constructed P to ensure P ∈ J₂(V; V⊥), the space of Hilbert–Schmidt operators from V to V⊥. Let W denote the subspace of H represented by the span of the columns of W. Let V₀ ≅ V denote the canonical subspace of H with the representation V₀ = (id; O), where O is the infinite matrix of zeros. The projections pr: W → V and pr: W → V⊥ respectively generate

W∥ = (id + Q; O) and W⊥ = (O; P).

This projection is achievable if and only if det₂(id + Q) ≠ 0. Assume this is the case for the moment. We observe that the subspace of H represented by the span of the columns of W∥ coincides with the subspace V; indeed, the transformation (id + Q)⁻¹ ∈ GL(V) transforms W∥ to V₀. Under the same transformation the representation W for W becomes (id; G), where G = P(id + Q)⁻¹. We observe that any subspace W that can be projected onto V can be represented in this way, and vice-versa. In this representation the operators G ∈ J₂(V, V⊥) parameterise all the subspaces W that can be projected onto V. If det₂(id + Q) = 0, so the projection above is not possible, we need to choose a different representative coordinate chart/patch. This effectively corresponds to choosing a subspace V′ ≅ V of H, different to the canonical subspace V₀ indicated above, such that the projection W → V′ is an isomorphism. This is always possible; see Pressley and Segal [54, Prop. 7.1.6]. For further details on coordinate patches see Beck et al. [9] and Doikou et al. [17], and for the analogous argument for the inflated system shown in Definition 2.7 see Doikou et al. [16].
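For a finite-dimensional discretisation of Q, both determinants can be evaluated from the eigenvalues, and the relation det₂(id + Q) = det(id + Q)e^(−tr Q) gives a convenient cross-check. A minimal sketch (all names our own):

```python
import numpy as np

def det_n(Q, n):
    """det_n(I + Q) = exp(sum_{k >= n} (-1)^(k-1)/k tr(Q^k)),
    evaluated via the eigenvalues of the discretised operator Q."""
    lam = np.linalg.eigvals(Q)
    if n == 1:
        return np.prod(1.0 + lam)                    # classical Fredholm determinant
    # det_2(I + Q) = det(I + Q) * exp(-tr Q), the regularised determinant
    return np.prod(1.0 + lam) * np.exp(-np.sum(lam))

# Example: a small random discretisation standing in for a trace-class Q.
rng = np.random.default_rng(0)
Q = 0.1 * rng.standard_normal((50, 50))
d1 = det_n(Q, 1)
d2 = det_n(Q, 2)
assert np.isclose(d1, np.linalg.det(np.eye(50) + Q))
assert np.isclose(d2, d1 * np.exp(-np.trace(Q)))
```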
We also discuss the implications of id + Q becoming singular, and the necessity for different coordinate patches, briefly in the Discussion Section 5.

3. Non-commutative fourth order quintic NLS
We now assume µ̃₂ = −µ₂, µ̃₃ = µ₃ and µ̃₄ = −µ₄. For our main result below we also assume d = d(∂) and d̃ = d̃(∂) satisfy the dispersion property in Definition 2.8. Hence we assume the parameters µ₂, µ₃, µ₄ ∈ C are such that µ₂, µ₄ ∈ iR, the set of pure imaginary numbers, and µ₃ ∈ R. We use the notation P† to denote the operator adjoint to the Hilbert–Schmidt operator P. In other words, if the Hilbert–Schmidt operator P has the kernel p, then P† is the Hilbert–Schmidt operator whose kernel is the complex-conjugate transpose of p, which we also denote by p†.

Theorem 3.1 (Quintic kernel NLS equation). Assume P, P̃, Q, Q̃, G and G̃ satisfy the linear operator system in Definition 2.7 and all the assumptions outlined in the last paragraph of Section 2 just above. Then for some T > 0, the integral kernel [G] = [G](y,z;x,t) satisfies, for every t ∈ [0,T], the matrix kernel equation:

(∂ₜ − µ₂∂² − µ₃∂³ − µ₄∂⁴)[G]
= 2µ₂[G][G̃][G] + 3µ₃((∂[G])[G̃][G] + [G][G̃](∂[G]))
+ 2µ₄((∂²[G])[G̃][G] + [G](∂²[G̃])[G] + 2[G][G̃](∂²[G]) + (∂[G])(∂[G̃])[G] + 3(∂[G])[G̃](∂[G]) + [G](∂[G̃])(∂[G]) + 3[G][G̃][G][G̃][G]).

As a special case, [G](0,0;x,t) = g(0,0;x,t) satisfies the matrix equation:

(∂ₜ − µ₂∂² − µ₃∂³ − µ₄∂⁴)g
= 2µ₂ g g̃ g + 3µ₃((∂g)g̃g + g g̃(∂g))
+ 2µ₄((∂²g)g̃g + g(∂²g̃)g + 2g g̃(∂²g) + (∂g)(∂g̃)g + 3(∂g)g̃(∂g) + g(∂g̃)(∂g) + 3g g̃ g g̃ g).

In particular, a consistent choice for P̃ is P̃ = P†, whence G̃ = G† and g̃ = g†.

Proof. Recall PU = VP and P̃V = UP̃ from Corollary 2.5. We split the proof into the following steps.

Step 1: Applying the linear dispersion operator to G = PU. With G = PU, using the Leibniz rule, that P satisfies the linear operator equation in Definition 2.7, and also the identities for ∂U, ∂²U and so forth from Lemma 2.4, we find:

∂ₜG − µ₂∂²G − µ₃∂³G − µ₄∂⁴G
= (∂ₜP)U − PU(∂ₜQ)U − µ₂((∂²P)U + 2(∂P)(∂U) + P(∂²U))
− µ₃((∂³P)U + 3(∂²P)(∂U) + 3(∂P)(∂²U) + P(∂³U))
− µ₄((∂⁴P)U + 4(∂³P)(∂U) + 6(∂²P)(∂²U) + 4(∂P)(∂³U) + P(∂⁴U))
= −PU(∂ₜQ − µ₂∂²Q − µ₃∂³Q − µ₄∂⁴Q)U
+ 2µ₂((∂P)U(∂Q)U + P(∂U)(∂Q)U)
+ 3µ₃((∂²P)U(∂Q)U + 2(∂P)(∂U)(∂Q)U + (∂P)U(∂²Q)U + P(∂²U)(∂Q)U + P(∂U)(∂²Q)U)
+ 2µ₄(2(∂³P)U(∂Q)U + 6(∂²P)(∂U)(∂Q)U + 3(∂²P)U(∂²Q)U + 6(∂P)(∂²U)(∂Q)U + 6(∂P)(∂U)(∂²Q)U + 2(∂P)U(∂³Q)U + 2P(∂³U)(∂Q)U + 3P(∂²U)(∂²Q)U + 2P(∂U)(∂³Q)U).

For the moment we focus on the first term on the right just above. Using µ̃₂ = −µ₂, µ̃₃ = µ₃ and µ̃₄ = −µ₄, we observe:

∂ₜQ − µ₂∂²Q − µ₃∂³Q − µ₄∂⁴Q
= (∂ₜP̃)P + P̃(∂ₜP) − µ₂((∂²P̃)P + 2(∂P̃)(∂P) + P̃(∂²P))
− µ₃((∂³P̃)P + 3(∂²P̃)(∂P) + 3(∂P̃)(∂²P) + P̃(∂³P))
− µ₄((∂⁴P̃)P + 4(∂³P̃)(∂P) + 6(∂²P̃)(∂²P) + 4(∂P̃)(∂³P) + P̃(∂⁴P))
= −2µ₂∂((∂P̃)P) − 3µ₃∂((∂P̃)(∂P)) − 2µ₄∂((∂³P̃)P + (∂²P̃)(∂P) + 2(∂P̃)(∂²P)).

Our proof now proceeds as follows. We substitute this last expression into the first term on the right in the previous equation and apply the kernel bracket operator, treating the coefficients of µ₂, µ₃ and µ₄ separately.

Step 2: Terms involving µ₂.
Applying the kernel bracket operator to these terms on the right in Step 1, modulo 2µ₂ we have:

[PU∂((∂P̃)P)U + (∂P)U(∂Q)U + P(∂U)(∂Q)U]
= [PU(∂P̃) + (∂P)UP̃ + P(∂U)P̃][PU]
= [∂(PUP̃)][G]
= [G][G̃][G],

where we used result (i) from the key identities Lemma 2.6. We have thus generated the term involving µ₂ on the right stated in the Theorem.

Step 3: Terms involving µ₃. Applying the bracket operator to these terms on the right in Step 1, using ∂²Q = ∂((∂P̃)P + P̃(∂P)), then modulo 3µ₃, we observe:

[PU∂((∂P̃)(∂P))U + (∂²P)U(∂Q)U + 2(∂P)(∂U)(∂Q)U + (∂P)U(∂²Q)U + P(∂²U)(∂Q)U + P(∂U)(∂²Q)U]
= [(∂²P)UP̃ + 2(∂P)(∂U)P̃ + (∂P)U(∂P̃) + P(∂²U)P̃ + P(∂U)(∂P̃)][PU]
+ [PU(∂P̃) + (∂P)UP̃ + P(∂U)P̃][(∂P)U].

The pre-factors of [(∂P)U] simplify to [∂(PUP̃)] = [G][G̃], using the key identities Lemma 2.6. Note [(∂P)U] itself is given by

[(∂P)U] = [∂(PU)] − [P(∂U)] = ∂[G] + [PU(∂Q)U] = ∂[G] + [PUP̃][G].

The pre-factors of [PU] from the first expression of this step simplify to

[∂²(PUP̃)] − [∂(PU(∂P̃))] = ∂([G][G̃]) − [G](∂[G̃]) − [G][G̃][PUP̃] = (∂[G])[G̃] − [G][G̃][PUP̃],

using the key identities Lemma 2.6. Combining these last two results, we see the terms on the right in the first expression of this step are:

[G][G̃](∂[G] + [PUP̃][G]) + ((∂[G])[G̃] − [G][G̃][PUP̃])[G],

which simplify to the terms involving 3µ₃ on the right stated in the Theorem.

Step 4: Terms involving µ₄. In practice we split our computation for these terms into several successive steps. We apply the kernel bracket operator to these terms on the right in Step 1, and use ∂²Q = ∂((∂P̃)P + P̃(∂P)) as well as ∂³Q = ∂((∂²P̃)P + 2(∂P̃)(∂P) + P̃(∂²P)). Collating terms with respective post-factors [(∂²P)U], [(∂P)U] and [PU], then modulo 2µ₄, we observe:

[2(∂³P)U(∂Q)U + 6(∂²P)(∂U)(∂Q)U + 3(∂²P)U(∂²Q)U + 6(∂P)(∂²U)(∂Q)U + 6(∂P)(∂U)(∂²Q)U + 2(∂P)U(∂³Q)U + 2P(∂³U)(∂Q)U + 3P(∂²U)(∂²Q)U + 2P(∂U)(∂³Q)U]
= 2[PU(∂P̃) + P(∂U)P̃ + (∂P)UP̃][(∂²P)U]
+ [PU(∂²P̃) + 4P(∂U)(∂P̃) + 3P(∂²U)P̃ + 4(∂P)U(∂P̃) + 6(∂P)(∂U)P̃ + 3(∂²P)UP̃][(∂P)U]
+ [PU(∂³P̃) + 2P(∂U)(∂²P̃) + 3P(∂²U)(∂P̃) + 2P(∂³U)P̃ + 2(∂P)U(∂²P̃) + 6(∂P)(∂U)(∂P̃) + 6(∂P)(∂²U)P̃ + 3(∂²P)U(∂P̃) + 6(∂²P)(∂U)P̃ + 2(∂³P)UP̃][PU].
Observe the pre-factor of the term [(∂²P)U] is 2[∂(PUP̃)] = 2[G][G̃], using the key identities Lemma 2.6. Note the factor [(∂²P)U] itself, using the inverse operator Lemma 2.4, is given by

[(∂²P)U] = [∂²(PU)] − 2[(∂P)(∂U)] − [P(∂²U)]
= ∂²[G] + 2[(∂P)U(∂Q)U] + 2[P(∂U)(∂Q)U] + [PU(∂²Q)U]
= ∂²[G] + 2[(∂P)UP̃][PU] + 2[P(∂U)P̃][PU] + [PU(∂P̃)][PU] + [PUP̃][(∂P)U].

Note the final term on the right just above, using the second relation from Step 3, is given by

[PUP̃][(∂P)U] = [PUP̃]∂[G] + [PUP̃]²[G] = [PUP̃]∂[G] − [P(∂U)P̃][G].

If we substitute this into the previous result we find, using the key identities Lemma 2.6:

[(∂²P)U] = ∂²[G] + [∂(PUP̃)][G] + [(∂P)UP̃][G] + [PUP̃]∂[G]
= ∂²[G] + [G][G̃][G] + [(∂P)UP̃][G] + [PUP̃]∂[G].

Hence the terms with post-factor [(∂²P)U] become

2[G][G̃](∂²[G] + [G][G̃][G] + [(∂P)UP̃][G] + [PUP̃]∂[G]).

Step 5: Terms involving µ₄ with post-factor [(∂P)U]. We consider the terms with post-factor [(∂P)U] from the first relation in Step 4. Modulo 2µ₄ the terms concerned equal [3∂²(PUP̃) − 2PU(∂²P̃) − 2P(∂U)(∂P̃) − 2(∂P)U(∂P̃)], which simplifies to

3∂([G][G̃]) − 2[∂(PU(∂P̃))] = 3∂([G][G̃]) − 2([G]∂[G̃] + [G][G̃][PUP̃]) = 3(∂[G])[G̃] + [G](∂[G̃]) − 2[G][G̃][PUP̃],

where we used the key identities Lemma 2.6. Since from the second relation from Step 3 we know [(∂P)U] = ∂[G] + [PUP̃][G], the terms with post-factor [(∂P)U] from the first relation in Step 4 become

(3(∂[G])[G̃] + [G](∂[G̃]) − 2[G][G̃][PUP̃])(∂[G] + [PUP̃][G]).

Step 6: Terms involving µ₄ with post-factor ∂[G]. We deal with the terms with post-factor [PU] = [G] from the first relation in Step 4 in Step 7 momentarily. However, from Steps 4 and 5, besides the single term with post-factor ∂²[G] in the last expression in Step 4, all the other terms will have post-factors ∂[G] or [G]. If we collect the terms from the very final relations in both Steps 4 and 5, then the terms with post-factor ∂[G] simplify to

(3(∂[G])[G̃] + [G](∂[G̃]))∂[G].

Step 7: Terms involving µ₄ with post-factor [G]. There are terms with post-factor [G] from the final relations in both Steps 4 and 5, which we re-introduce presently in Step 8. However, momentarily we focus on the terms with post-factor [PU] = [G] from the first relation in Step 4. Modulo 2µ₄ the terms concerned are equal to

[2∂³(PUP̃) − PU(∂³P̃) − 4(∂(PU))(∂²P̃) − 3(∂²(PU))(∂P̃)]
= 2∂²([G][G̃]) − [PU(∂³P̃) + 4(∂(PU))(∂²P̃) + 3(∂²(PU))(∂P̃)]
= 2∂²([G][G̃]) − ∂[∂(PU(∂P̃))] + 2[PU(∂³P̃)] + 2[(∂(PU))(∂²P̃)]
= 2∂²([G][G̃]) − ∂([G]∂[G̃]) − ([G][G̃])² − ∂([G][G̃])[PUP̃] + 2∂[PU(∂²P̃)],

where we used relation (v) in the key identities Lemma 2.6 in the final step, as well as combining the final two terms on the right.

Step 8: Combining all terms involving µ₄. We now combine all the terms involving µ₄ together.
These are all the terms on the right in the final relation in Step 7, for which we need to include the post-factor [G]; the two terms from the final expression in Step 6 which have the post-factor ∂[G]; all the terms with post-factor [G] from the final expression in Step 5; and finally all the terms with post-factor [G], as well as the single term with post-factor ∂²[G], from the final expression in Step 4. Modulo 2µ₄, using that [PUP̃]² = [PU∂(P̃P)UP̃] = −[P(∂U)P̃], these terms combine to give:

2[G][G̃]∂²[G] − ([G][G̃])²[G] + 3(∂[G])[G̃]∂[G]
+ [G](∂[G̃])∂[G] + 2∂²([G][G̃])[G] − ∂([G]∂[G̃])[G]
+ 2([G][G̃][(∂(PU))P̃] − [G](∂[G̃])[PUP̃] + ∂[PU(∂²P̃)])[G].

Substituting the result (iii) from the key identities Lemma 2.6, and combining like terms, then gives the first statement of the theorem.
Step 9: Remaining statements.
The second statement is a special case that follows by setting y = z = 0 in the first statement. For the third statement, if we suppose P̃ = P†, then U = (id + P†P)⁻¹ = U† and G̃ = P†(id + PP†)⁻¹ = (id + P†P)⁻¹P† = UP† = (PU)† = G†. □

Judicious choices for P̃ generate reverse space-time and reverse time nonlocal versions of the quintic NLS equations stated in Theorem 3.1, as follows.

Corollary 3.2 (Reverse space-time nonlocal matrix quintic NLS equation). Suppose µ₃ = 0 and recall we assume µ₂ and µ₄ are pure imaginary parameters. If we choose P̃(x,t) = Pᵀ(−x,−t), where Pᵀ is the operator whose matrix kernel is the transpose of the matrix kernel corresponding to P, then the matrix kernel [G] = [G](y,z;x,t) satisfies, for every t ∈ [0,T], the reverse space-time nonlocal matrix quintic NLS kernel equation given by the equation for [G] stated in Theorem 3.1 with G̃(x,t) = Gᵀ(−x,−t). Further, setting y = z = 0 generates the reverse space-time nonlocal matrix quintic NLS equation for g = g(0,0;x,t) stated in Theorem 3.1 with g̃(0,0;x,t) = gᵀ(0,0;−x,−t). Similarly, if we choose P̃(x,t) = Pᵀ(x,−t) then we generate the corresponding reverse time only nonlocal equations.

Proof. Recall that since µ̃₂ = −µ₂ and µ̃₄ = −µ₄, the operators P and P̃ satisfy the respective linear PDEs ∂ₜP = µ₂∂²P + µ₄∂⁴P and ∂ₜP̃ = −µ₂∂²P̃ − µ₄∂⁴P̃. The choice P̃(x,t) = Pᵀ(−x,−t) is consistent with these two equations. Recall G = PU and G̃ = P̃V, where U = (id + P̃P)⁻¹ and V = (id + PP̃)⁻¹. If we substitute P̃(x,t) = Pᵀ(−x,−t) into these expressions for G and G̃ we observe

Gᵀ(−x,−t) = (id + Pᵀ(−x,−t)P(x,t))⁻¹ Pᵀ(−x,−t),

while

G̃(x,t) = Pᵀ(−x,−t)(id + P(x,t)Pᵀ(−x,−t))⁻¹.

We deduce G̃(x,t) = Gᵀ(−x,−t) and the reverse space-time results follow.
The reverse time only result follows immediately. □

Remark. Suppose in the final statement of Theorem 3.1 we instead make the choice P̃ = −P†. This choice is still consistent with the linear partial differential equations satisfied by P and P̃ in the linear operator system in Definition 2.7, with µ̃₂ = −µ₂, µ̃₃ = µ₃ and µ̃₄ = −µ₄, and µ₂ and µ₄ chosen pure imaginary while µ₃ is chosen to be real. With this choice for P̃ we observe V = (id − PP†)⁻¹ = V† and thus G̃ = P̃V = −P†V = −(VP)† = −G†. Hence with the choice P̃ = −P†, we generate the equations for [G] and g shown in Theorem 3.1 but with G̃ = −G† instead. This has the effect of changing the sign of all the degree three terms while the sign of the quintic term is unchanged.
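The operator identities invoked in the proofs above, such as PU = VP from Corollary 2.5 and G̃ = G† for the choice P̃ = P†, are push-through identities that can be sanity-checked on finite matrices. A toy verification (a finite-dimensional stand-in for the Hilbert–Schmidt setting; all names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
P = 0.3 * (rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
Pt = P.conj().T                    # the choice P~ = P^dagger
I = np.eye(5)

U = np.linalg.inv(I + Pt @ P)      # U = (id + P~P)^(-1)
V = np.linalg.inv(I + P @ Pt)      # V = (id + P P~)^(-1)
G = P @ U                          # G = PU
Gt = Pt @ V                        # G~ = P~V

# PU = VP (push-through), G~ = G^dagger, and the Fredholm equation P = G(id + Q).
assert np.allclose(P @ U, V @ P)
assert np.allclose(Gt, G.conj().T)
assert np.allclose(G @ (I + Pt @ P), P)
```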
Remark. For Theorem 3.1 we assumed µ₂, µ₄ ∈ iR and µ₃ ∈ R to ensure d = d(∂) satisfied the dispersion property in Definition 2.8. This ensures suitable regularity for the kernel function p = p(x,t) which solves the underlying linear partial differential equation. That in turn ensures a suitable solution to the linear Fredholm equation for G; recall the discussion at the end of Section 2. If the parameters µ₂, µ₃ and µ₄ are more general, so the dispersion property does not hold, then establishing suitable regularity for p = p(x,t) requires further investigation. With regards to the choices of the parameters µ̃_j we made, for j = 2, 3, 4, we could in principle choose these differently to µ̃_j = (−1)^(j−1) µ_j. For example, the choice µ̃₃ = −µ₃ appears to be consistent and may lead to different equations.

4. Numerical simulations
We present numerical simulations for the commutative version of the fourth order quintic nonlinear Schrödinger equation from Theorem 3.1. The commutative version of these equations appears most frequently in practice in the modelling of ultrashort pulse propagation in optical fibres; see for example Kang et al. [35], Guo et al. [30] or Wang et al. [63]. We chose the parameter values µ₂ = −i, µ₃ = 1 and µ₄ = i for the purposes of simulation. We provide two independent simulations to generate approximations to the solution of the commutative version of the equation for g = g(0,0;x,t) in Theorem 3.1, as follows: (i) direct numerical simulation based on an adaptation of a well-known spectral algorithm, which advances the approximate solution in Fourier space, denoted by uₘ, in successive time steps; (ii) generation of an approximation to the solution, denoted ĝ, using the Grassmann–Pöppe method, i.e. based on the analytical linearisation approach we have outlined in Sections 2 and 3. For both numerical methods we chose an initial profile function p₀ = p₀(x) on the interval [−L/2, L/2] for some fixed domain length L > 0. Recall from Definition 2.7, assuming we set P̃ = P†, that given the Hankel operator P = P(x,t), or equivalently its kernel p = p(y + z + x; t), the functions q = q(y,z;x,t) and g = g(y,z;x,t) are prescribed by the equations

q(y,z;x,t) = ∫_{−L/2}^{0} p†(y + ζ + x; t) p(ζ + z + x; t) dζ

and

p(y + z + x; t) = g(y,z;x,t) + ∫_{−L/2}^{0} g(y,ζ;x,t) q(ζ,z;x,t) dζ.

Note, in this commutative setting with p scalar, the quantity p† = p* in the definition for q above is just the complex conjugate of p. In our simulations we chose the initial profile p₀(x) = 0.15 sech(x/40) with L = 40.

First, let us outline the direct numerical simulation method. The first task in this method is to compute the initial data g₀. To achieve this we compute the initial auxiliary data function q₀ = q₀(y,z;x), and then the initial data profile g₀ = g₀(y,z;x), by numerically solving the equations just above with p, q and g replaced respectively by p₀, q₀ and g₀. Note q₀ is just defined in terms of the data function p₀, and the integral involved is approximated using a left-hand Riemann sum. On the other hand, we must solve the second equation above for g₀. We achieve this by approximating the integral, again by a left-hand Riemann sum, and solving the resulting large linear algebraic system of equations. Hence we generated the approximation ĝ₀ = ĝ₀(y,z;x) from the profile for p₀ stated, using the procedure just outlined. We set the initial data for the direct numerical method u₀ to be the Fourier transform of ĝ₀(0,0;x).

The Fourier spectral method we used to advance the commutative fourth order quintic nonlinear Schrödinger solution forward in time is as follows. Suppose Ψ = Ψ(g, ∂g, ∂²g) denotes the quintic nonlinear function of g, ∂g and ∂²g, with g̃ = g*, shown on the right-hand side of the evolution equation for g given in Theorem 3.1. Naturally, in the current commutative setting, the functional form for Ψ shown can be further simplified. The Fourier spectral split-step method we use is given by:

vₘ = exp(∆t(µ₂K² + µ₃K³ + µ₄K⁴)) uₘ,
uₘ₊₁ = vₘ + ∆t F(Ψ(F⁻¹(vₘ), F⁻¹(Kvₘ), F⁻¹(K²vₘ))),

where F denotes the Fourier transform and K is the diagonal matrix of Fourier coefficients 2πik. In practice we use the fast Fourier transform and its inverse. We chose ∆t = 0.001, and the number of Fourier modes is a power of two. The result is shown in the top two panels in Figure 1.

Second, in the Grassmann–Pöppe method, given the initial profile p₀ = p₀(x), we analytically advance the Fourier transform F(p₀) of the initial data in Fourier space to any time of interest t ∈ [0,T], where T = 100. To generate p = p(x,t) at that time, we then compute the inverse Fourier transform. In other words, we compute the approximation p̂(x,t) = F⁻¹(exp(t(µ₂K² + µ₃K³ + µ₄K⁴))F(p₀)) for any given time t. We then compute an approximation q̂ = q̂(y,z;x,t) at the time t by approximating the integral in the prescription for q above by a left-hand Riemann sum. To generate an approximation ĝ for g = g(y,z;x,t) at that time, we approximate the integral in the linear Fredholm integral equation prescribing g above by a left-hand Riemann sum and solve the resulting linear algebraic system of equations for ĝ = ĝ(y,z;x,t). An approximation to the solution of the commutative fourth order quintic nonlinear Schrödinger equation is then ĝ(0,0;x,t). The result is shown in the middle two panels in Figure 1. The magnitude of the difference between ĝ(0,0;x,t) and uₘ is shown in the lower left panel. The lower right panel shows the magnitude of the Fredholm determinant det(id + Q̂(x,t)), where Q̂ = Q̂(x,t) is the linear operator associated with the kernel function q̂ = q̂(y,z;x,t).

Remark. Our main result in Theorem 3.1 concerned the fourth order quintic nonlinear Schrödinger equation on the real line. The numerical simulations above are based on Fourier spectral approximations on the domain [−L/2, L/2] with periodic boundary conditions. Indeed, in the first step of the Grassmann–Pöppe method above, we generate approximate solutions to the linear "base" partial differential equation by taking the inverse fast Fourier transform of the exact spectral representation for the solution. Our numerical simulations shown in Figure 1 demonstrate the Grassmann–Pöppe method appears to work perfectly well in the periodic context. However, this does require further investigation, both analytically and numerical-analytically.
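The split-step iteration above can be sketched in a few lines of NumPy. Grid size, step count, and initialising directly from the profile p₀ (rather than from ĝ₀) are our own simplifications for illustration; the linear symbol and the commutative form of Ψ follow the displayed formulas, and note that the dispersion property makes the linear exponential factor unimodular:

```python
import numpy as np

# Parameter values from the text: mu2 = -i, mu3 = 1, mu4 = i.
mu2, mu3, mu4 = -1j, 1.0, 1j
L, N, dt = 40.0, 256, 0.001              # N = 2^8 modes is our own choice

x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2j * np.pi * np.fft.fftfreq(N, d=L/N)    # diagonal "K" of Fourier symbols

def Psi(g, gx, gxx):
    """Commutative quintic right-hand side of Theorem 3.1 with g~ = conj(g)."""
    gc = np.conj(g)
    return (2*mu2 * g*gc*g
            + 3*mu3 * (gx*gc*g + g*gc*gx)
            + 2*mu4 * (gxx*gc*g + g*np.conj(gxx)*g + 2*g*gc*gxx
                       + gx*np.conj(gx)*g + 3*gx*gc*gx + g*np.conj(gx)*gx
                       + 3*g*gc*g*gc*g))

def split_step(u):
    """One step: exact linear flow in Fourier space, then an Euler nonlinear step."""
    v = np.exp(dt * (mu2*k**2 + mu3*k**3 + mu4*k**4)) * u
    g, gx, gxx = np.fft.ifft(v), np.fft.ifft(k*v), np.fft.ifft(k**2*v)
    return v + dt * np.fft.fft(Psi(g, gx, gxx))

u = np.fft.fft(0.15 / np.cosh(x/40))     # profile shape as stated in the text
for _ in range(100):
    u = split_step(u)
```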
Remark . This Grassmann–P¨oppe method was used to generateapproximate solutions to the standard KdV and NLS equations in Doikou et al. [17].
Figure 1. We plot the solution to the commutative version of the fourth order quintic nonlinear Schrödinger equation from Theorem 3.1. We chose the parameter values µ₂ = −i, µ₃ = 1 and µ₄ = i. The top panels show the real and imaginary parts computed using a direct integration approach, while the middle panels show the corresponding real and imaginary parts computed using the Grassmann–Pöppe method. The bottom left panel shows the magnitude of the difference between the two computed solutions. The bottom right panel shows the evolution of the magnitude of the Fredholm determinant of Q = Q(x, t).

Remark. In principle we could generate the numerical solutions using the Grassmann–Pöppe approach from given initial data g₀ = g₀(x) for the kernel function g: the initial data p₀ for the linearised partial differential system for p = p(x, t) can be computed from g₀ via ‘scattering’, as suggested for example by McKean [41].

Remark. We emphasise the following efficiency property of the Grassmann–Pöppe method. Given the initial data p₀, we compute its fast Fourier transform p̂₀ = p̂₀(k), for a finite set of wavenumbers k. We then advance the individual k-modes of p̂₀ to any given time t > 0 by multiplying by the factor exp(t(µ₂(2πik)² + µ₃(2πik)³ + µ₄(2πik)⁴)), corresponding to the linear partial differential equation prescribing p = p(x, t). We can thus generate the Fourier coefficients p̂ = p̂(k, t) at any given time in one single step. We generate an approximation for p = p(x, t) by then computing the inverse fast Fourier transform. And then finally we can generate g = g(y, z; x, t) by solving the linear Fredholm equation at that time t > 0, as described above. In contrast, the direct numerical simulation method requires computing the solution over successive small time steps to evaluate the solution at any time t > 0.
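The one-step property claimed in this remark is easy to verify numerically: because the base equation is linear, multiplying the k-th Fourier coefficient by exp(t·symbol) once agrees, up to rounding error, with composing many small multiplicative steps exp(Δt·symbol). The snippet below checks this for the symbol µ₂K² + µ₃K³ + µ₄K⁴ with the caption's parameter values; the grid size, final time and step count are arbitrary illustrative choices.

```python
import numpy as np

N, L, T, nsteps = 128, 40.0, 2.0, 1000
mu2, mu3, mu4 = -1j, 1.0, 1j

K = 2j * np.pi * np.fft.fftfreq(N, d=L / N)      # Fourier symbol of d/dx
sym = mu2 * K**2 + mu3 * K**3 + mu4 * K**4

rng = np.random.default_rng(1)
phat0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

one_step = np.exp(T * sym) * phat0               # advance to time T directly

stepped = phat0.copy()                            # advance via many small steps
for _ in range(nsteps):
    stepped *= np.exp((T / nsteps) * sym)
# one_step and stepped agree to rounding error
```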
Discussion
The advantages of the method we present to establish integrability for the generalised non-commutative fourth order quintic NLS equation, based on Pöppe's Hankel operator approach, are as follows. First, the method is abstract. Once the Fredholm equation P = G(id + Q) is established, the computation proceeds entirely at the operator level. The key initial ingredients are that the scattering data P is a Hankel operator, depends on the parameters x and t, and satisfies a linear evolution equation in t involving a derivation operation with respect to x. And then the auxiliary data Q is assigned appropriately in terms of P. Second, with this in hand, we can proceed in the operator algebra, to which a derivation operation can be applied, once we endow that algebra with a kernel product rule associated with the Hankel components. The procedure to establish a closed-form nonlinear kernel equation is then direct and elementary, only requiring basic calculus.

The observant reader will have noticed in the proof of Theorem 3.1 that, once we computed ∂ₜG − d(∂)G in Step 1, the remaining Steps 2–9 in the proof were a collating exercise, once we applied the kernel bracket operator at the very beginning of Step 2. Indeed, retrospectively we note the following. Step 2 dealt with the terms with factor µ₂, i.e. those associated with the second order part of d(∂). The key identity in Step 2 helping to establish a closed nonlinear form is identity (i) in Lemma 2.6 for ∂[P U P̃]. Step 3 dealt with the terms with factor µ₃, i.e. those associated with the third order part of d(∂). The key identity in Step 3 that established a closed nonlinear form is identity (ii) in Lemma 2.6 for ∂[P U (∂P̃)], in addition to identity (i). Then in Steps 4–8, which dealt with the terms with factor µ₄, the key identity was (iii) in Lemma 2.6 for ∂[P U (∂²P̃)], in addition to the previous two.
Indeed, the main work in establishing our main result in Theorem 3.1 was the proof of identities (i)–(iii) in Lemma 2.6. It is likely the proof of the Lemma can be simplified further. This suggests that a key identity for the quintic order case, i.e. when the order of d(∂) is five, will involve an analogous expression for ∂[P U (∂³P̃)]. This is the last case presented in Nijhoff et al. [45]. An explicit closed-form expression for all orders is obviously of interest, perhaps via a non-commutative generalisation of the recursion relation for the NLS hierarchy; see for example Pöppe [50] or Matveev and Smirnov [40].
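A related practical point concerns the solvability condition det(id + Q) ≠ 0 for the Fredholm equation. Under the same left-hand Riemann sum discretisation used in Section 4, the Fredholm determinant is approximated by the ordinary matrix determinant det(I + hQ̂) of the discretised operator. As a sanity check one can use a rank-one kernel q(y, z) = a(y)b(z) on [0, 1], for which the Fredholm determinant is known in closed form, 1 + ∫₀¹ a(s)b(s) ds; the specific kernel below is our own toy example, not the operator Q of the paper.

```python
import numpy as np

def fredholm_det(kernel, n=400, length=1.0):
    """Approximate det(id + Q) for an integral operator on [0, length]
    with kernel q(y, z), via the finite section det(I + h*Qmat) using a
    left-hand Riemann sum of mesh h = length/n."""
    h = length / n
    s = np.arange(n) * h                      # left endpoints
    Qmat = kernel(s[:, None], s[None, :])     # Qmat[i, j] = q(s_i, s_j)
    return np.linalg.det(np.eye(n) + h * Qmat)

# Rank-one toy kernel q(y, z) = exp(y)*exp(-z): the exact Fredholm
# determinant is 1 + \int_0^1 exp(s)exp(-s) ds = 2.
d = fredholm_det(lambda y, z: np.exp(y) * np.exp(-z))
```

Monitoring this quantity along the flow, as in the lower right panel of Figure 1, would signal when the representative G degenerates.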
Some final observations are as follows. In Section 2 we discussed how a solution to the linear Fredholm equation for G exists provided the determinant det(id + Q) ≠ 0. This is guaranteed locally in time under the conditions stated therein. Recall Q is prescribed directly and solely in terms of P and P†. Hence the evolution of P, and thus the determinant det(id + Q), determines the existence of G. If the determinant becomes zero at some time t then the solution G may become singular. More specifically, depending on the route to singularity, certain eigenvalues of G will become singular; see Beck and Malham [8]. However, such singular behaviour in the context of Grassmannian flows simply indicates a poor choice of representative coordinate patch. By changing to another suitable patch, which is always possible, the solution can be continued. A careful analysis of this scenario is required. Additionally, a careful numerical analysis of the Grassmann–Pöppe method we presented in Section 4 is also required. Acknowledgement
SJAM would like to thank Ioannis Stylianidis for stimulating discussions.
References

[1] A. Abbondandolo, P. Majer, Infinite dimensional Grassmannians, J. Operator Theory (1), 19–62 (2009).
[2] M.J. Ablowitz, Z.H. Musslimani, Integrable Nonlocal Nonlinear Equations, Stud. Appl. Math. (1) (2017).
[3] M.J. Ablowitz, B. Prinari, D. Trubatch, Discrete and Continuous Nonlinear Schrödinger Systems, Cambridge University Press (2004).
[4] M.J. Ablowitz, A. Ramani, H. Segur, A connection between nonlinear evolution equations and ordinary differential equations of P-type. II, Journal of Mathematical Physics, 1006–1015 (1980).
[5] G.P. Agrawal, Applications of nonlinear fiber optics, Academic Press (2001).
[6] E. Andruchow, G. Larotonda, Lagrangian Grassmannian in infinite dimension, Journal of Geometry and Physics, 306–320 (2009).
[7] W. Bauhardt, Ch. Pöppe, The Zakharov–Shabat inverse spectral problem for operators, J. Math. Phys. (7), 3073–3086 (1993).
[8] M. Beck, S.J.A. Malham, Computing the Maslov index for large systems, PAMS, 2159–2173 (2015).
[9] M. Beck, A. Doikou, S.J.A. Malham, I. Stylianidis, Grassmannian flows and applications to nonlinear partial differential equations, Proc. Abel Symposium (2018).
[10] M. Beck, A. Doikou, S.J.A. Malham, I. Stylianidis, Partial differential systems with nonlocal nonlinearities: generation and solutions, Phil. Trans. R. Soc. A (2117) (2018).
[11] M. Ben-Artzi, H. Koch, J.C. Saut, Dispersion estimates for fourth order Schrödinger equations, C. R. Math. Acad. Sci. Sér. 1, 87–92 (2000).
[12] G. Blower, S. Newsham, On tau functions associated with linear systems, Operator theory advances and applications: IWOTA Lisbon 2019, eds. Amelia Bastos, Luis Castro, Alexei Karlovich, Springer Birkhäuser (2020).
[13] T. Boulenger, E. Lenzmann, Blowup for biharmonic NLS, arXiv:1503.01741v2 (2015).
[14] A. Degasperis, S. Lombardo, Multicomponent integrable wave equations: II. Soliton solutions, J. Phys. A: Math. Theor. (38) (2009).
[15] S.T. Demiray, Y. Pandir, H. Bulut, All exact travelling wave solutions of Hirota equation and Hirota–Maccari system, Optik, 1848–1859 (2016).
[16] A. Doikou, S.J.A. Malham, I. Stylianidis, Grassmannian flows and applications to non-commutative non-local and local integrable systems, Physica D, 132744 (2021).
[17] A. Doikou, S.J.A. Malham, I. Stylianidis, A. Wiese, Applications of Grassmannian flows to nonlinear systems, submitted (2020).
[18] F.J. Dyson, Fredholm determinants and inverse scattering problems, Commun. Math. Phys., 171–183 (1976).
[19] N. Ercolani, H.P. McKean, Geometry of KDV (4): Abel sums, Jacobi variety, and theta function in the scattering case, Invent. Math., 483–544 (1990).
[20] G. Fibich, B. Ilan, G. Papanicolaou, Self-focusing with fourth order dispersion, SIAM J. Appl. Math. (4), 1437–1462 (2002).
[21] A.S. Fokas, Integrable multidimensional versions of the nonlocal nonlinear Schrödinger equation, Nonlinearity, 319–324 (2016).
[22] A.S. Fokas, M.J. Ablowitz, Linearization of the Korteweg de Vries and Painlevé II Equations, Phys. Rev. Lett., 1096 (1981).
[23] A.S. Fokas, B. Pelloni, Unified Transform for Boundary Value Problems: Applications and Advances, Society for Industrial and Applied Mathematics (2014).
[24] A.P. Fordy, P.P. Kulish, Nonlinear Schrödinger Equations and Simple Lie Algebras, Commun. Math. Phys., 427–443 (1983).
[25] G.G. Grahovski, A.J. Mohammed, H. Susanto, Nonlocal Reductions of the Ablowitz–Ladik Equation, Theor. Math. Phys., 1412–1429 (2018).
[26] S. Grellier, P. Gerard, The cubic Szegő equation and Hankel operators, Astérisque, Société Mathématique de France, Paris (2017).
[27] S. Grudsky, A. Rybkin, On classical solutions of the KdV equation, Proc. London Math. Soc., 354–371 (2020).
[28] S. Grudsky, A. Rybkin, Soliton theory and Hankel operators, SIAM J. Math. Anal., 2283–2323 (2015).
[29] Y.-Y. Guan, B. Tian, H.-L. Zhen, Y.-F. Wang, J. Chai, Soliton solutions of a generalised nonlinear Schrödinger–Maxwell–Bloch system in the erbium-doped optical fibre, Zeitschrift für Naturforschung (3), doi.org/10.1515/zna-2015-0466 (2016).
[30] R. Guo, H.-Q. Hao, X.-S. Gu, Modulation instability, breathers, and bound solitons in an erbium-doped fiber system with higher-order effects, Abstract and Applied Analysis, dx.doi.org/10.1155/2014/185654 (2014).
[31] M. Gürses, A. Pekcan, Superposition of the coupled NLS and MKdV Systems, Appl. Math. Lett., 157–163 (2019).
[32] M. Gürses, A. Pekcan, Nonlocal modified KdV equations and their soliton solutions, Commun. Nonlinear Sci. Numer. Simul., 427–448 (2019).
[33] M. Gürses, A. Pekcan, Nonlocal nonlinear Schrödinger equations and their soliton solutions, J. Math. Phys., 051501 (2018).
[34] M. Gürses, A. Pekcan, Nonlocal KdV equations, arXiv:2004.07144 (2020).
[35] Z.-Z. Kang, T.-C. Xia, W.-X. Ma, Riemann–Hilbert approach and N-soliton solution for an eighth-order nonlinear Schrödinger equation in an optical fiber, Advances in Difference Equations, doi.org/10.1186/s13662-019-2121-5 (2019).
[36] V.I. Karpman, Stabilization of soliton instabilities by higher order dispersion: Fourth-order nonlinear Schrödinger-type equations, Phys. Rev. E, 1336–1339 (1996).
[37] V.I. Karpman, A.G. Shagalov, Stability of solitons described by nonlinear Schrödinger-type equations with higher-order dispersion, Phys. D, 194–210 (2000).
[38] C. Kwak, Periodic fourth order cubic NLS: Local well-posedness and non-squeezing property, arXiv:1708.00127v2 (2018).
[39] S.V. Manakov, On the theory of two-dimensional stationary self-focusing of electromagnetic waves, Sov. Phys. JETP (2), 248–253 (1974).
[40] V.B. Matveev, A.O. Smirnov, AKNS and NLS hierarchies, MRW solutions, Pₙ breathers, and beyond, Journal of Mathematical Physics, 091419 (2018).
[41] H.P. McKean, Fredholm determinants, Cent. Eur. J. Math., 205–243 (2011).
[42] D. Mihalache, N.-C. Panoiu, F. Moldoveanu, D.-M. Baboiu, The Riemann–Hilbert problem method for solving a perturbed nonlinear Schrödinger equation describing pulse propagation in optic fibres, J. Phys. A: Math. Gen., 6177–6189 (1994).
[43] D. Mumford, Tata lectures on Theta II, Modern Birkhäuser Classics (1984).
[44] K. Nakkeeran, Optical solitons in erbium doped fibers with higher order effects, Physics Letters A, 415–418 (2000).
[45] F.W. Nijhoff, G.R.W. Quispel, J. Van Der Linden, H.W. Capel, On some linear integral equations generating solutions of nonlinear partial differential equations, Physica A, 101–142 (1983).
[46] T. Oh, Y. Wang, Global well-posedness of the periodic cubic fourth order NLS in negative Sobolev spaces, arXiv:1707.02013v2 (2018).
[47] B. Pausader, The cubic fourth order Schrödinger equation, Journal of Functional Analysis, 2473–2517 (2009).
[48] D.E. Pelinovsky, Y.A. Stepanyants, Helical solitons in vector modified Korteweg-de Vries equations, Physics Letters A, 3165–3171 (2018).
[49] C. Pöppe, Construction of solutions of the sine-Gordon equation by means of Fredholm determinants, Physica D, 103–139 (1983).
[50] C. Pöppe, The Fredholm determinant method for the KdV equations, Physica D, 137–160 (1984).
[51] C. Pöppe, General determinants and the τ function for the Kadomtsev–Petviashvili hierarchy, Inverse Problems, 613–630 (1984).
[52] C. Pöppe, D.H. Sattinger, Fredholm determinants and the τ function for the Kadomtsev–Petviashvili hierarchy, Publ. RIMS, Kyoto Univ., 505–538 (1988).
[53] I. Posukhovskyi, A. Stefanov, On the normalized ground states for the Kawahara equation and a fourth order NLS, arXiv:1711.00367v3 (2020).
[54] A. Pressley, G. Segal, Loop groups, Oxford Mathematical Monographs, Clarendon Press, Oxford (1986).
[55] Y. Ren, Z.-Y. Yang, C. Liu, W.-H. Xu, W.-L. Yang, Characteristics of optical multi-peak solitons induced by higher-order effects in an erbium-doped fiber system, Eur. Phys. J. D, DOI: 10.1140/epjd/e2016-70079-7 (2016).
[56] M. Sato, Soliton equations as dynamical systems on infinite dimensional Grassmann manifolds, RIMS, 30–46 (1981).
[57] M. Sato, The KP hierarchy and infinite dimensional Grassmann manifolds, Proceedings of Symposia in Pure Mathematics, Part 1, 51–66 (1989).
[58] G. Segal, G. Wilson, Loop groups and equations of KdV type, Inst. Hautes Études Sci. Publ. Math., 5–65 (1985).
[59] B. Simon, Trace ideals and their applications, 2nd edn., Mathematical Surveys and Monographs, vol. 120, AMS, Providence, RI (2005).
[60] I. Stylianidis, Grassmannian flows: applications to PDEs with local and nonlocal nonlinearities, PhD Thesis, in preparation (2020).
[61] L. Wang, S. Li, F.-H. Qi, Breather-to-soliton and rogue wave-to-soliton transitions in a resonant erbium-doped fiber system with higher-order effects, Nonlinear Dyn., 389–398 (2016).
[62] L. Wang, X. Wu, H.-Y. Zhang, Superregular breathers and state transitions in a resonant erbium-doped fiber system with higher-order effects, Physics Letters A, 2650–2654 (2018).
[63] L.H. Wang, K. Porsezian, J.S. He, Breather and rogue wave solutions of a generalized nonlinear Schrödinger equation, Physical Review E, 053202 (2013).
[64] Q.-M. Wang, Y.-T. Gao, C.-Q. Su, D.-W. Zuo, Solitons, breathers and rogue waves for a higher order nonlinear Schrödinger–Maxwell–Bloch system in an erbium-doped fiber system, Phys. Scr., 105202 (2015).
[65] V.E. Zakharov, A.B. Shabat, A scheme for integrating the nonlinear equations of mathematical physics by the method of the inverse scattering problem I, Funct. Anal. Appl., 226–235 (1974).
[66] V.E. Zakharov, A.B. Shabat, Integration of nonlinear equations of mathematical physics by the method of inverse scattering II, Funct. Anal. Appl. (3), 166–174 (1979).

Maxwell Institute for Mathematical Sciences, and School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS
Email address: