Apparent Singularities of D-finite Systems
Shaoshi Chen∗, Manuel Kauers†, Ziming Li‡ and Yi Zhang§

KLMM, AMSS, Chinese Academy of Sciences, Beijing, China
Institute for Algebra, Johannes Kepler University Linz, Austria
RICAM, Austrian Academy of Sciences, Austria
Abstract
We generalize the notions of singularities and ordinary points from linear ordinary differential equations to D-finite systems. Ordinary points of a D-finite system are characterized in terms of its formal power series solutions. We also show that apparent singularities can be removed like in the univariate case by adding suitable additional solutions to the system at hand. Several algorithms are presented for removing and detecting apparent singularities. In addition, an algorithm is given for computing formal power series solutions of a D-finite system at apparent singularities.
Ordinary linear differential equations allow easy access to the singularities of their solutions: every point α which is a singularity of some solution f of the differential equation must be a zero of the coefficient of the highest order derivative appearing in the equation, or a singularity of one of the other coefficients. For example, x^{-1} is a solution of the equation x f'(x) + f(x) = 0, and the singularity at 0 is reflected by the root of the polynomial x in front of the term f'(x) in the equation. Unfortunately, the converse is not true: there may be roots of the leading coefficient which do not indicate solutions that are singular there. For example, all the solutions of the equation x f'(x) − f(x) = 0 are constant multiples of x, and none of these functions is singular at 0.

For a differential equation p_0(x) f(x) + ··· + p_r(x) f^{(r)}(x) = 0 with polynomial coefficients p_0, ..., p_r and p_r ≠ 0, the roots of p_r are called the singularities of the equation. Those roots α of p_r such that the equation has no solution that is singular at α are called apparent. In other words, a root α of p_r is apparent if the differential equation admits r linearly independent formal power series solutions in x − α. Deciding whether a singularity is apparent is therefore closely related to the indicial polynomial of the equation at α: if there exists a power series solution of the form (x − α)^ℓ + ···, then ℓ is a root of this polynomial.

∗ Supported by the NSFC grant 11501552 and by the President Fund of the Academy of Mathematics and Systems Science, CAS (2014-cjrwlzx-chshsh). Email: [email protected]
† Supported by the Austrian Science Fund (FWF): F50-04 and Y464-N18. Email: [email protected]
‡ Supported by the NSFC grants (91118001, 60821002/F02) and a 973 project (2011CB302401). Email: [email protected]
§ Supported by the Austrian Science Fund (FWF): Y464-N18 and P29467-N32. Email: [email protected]

When some singularity α of an ODE is apparent, then it is always possible to construct a second ODE whose solution space contains all the solutions of the first ODE, and which does not have α as a singularity. This process is called desingularization. The idea is easily explained. The key observation is that a point α is a singularity if and only if the indicial polynomial at α is different from n(n − 1)···(n − r + 1) or the ODE does not admit r linearly independent formal power series solutions in x − α. As the indicial polynomial at an apparent singularity has only nonnegative integer roots, we can bring it into the required form by adding a finite number of new factors. Adding a factor n − s to the indicial polynomial amounts to adding a solution of the form (x − α)^s + ··· to the solution space, and this is an easy thing to do using well-known arithmetic of differential operators. See [1, 4, 7, 12, 13] for an expanded version of this argument and [1, 2] for analogous algorithms for recurrence equations.

The purpose of the present paper is to generalize the two facts sketched above to the multivariate setting. Instead of an ODE, we consider systems of PDEs known as D-finite systems. For such systems, we define the notion of a singularity in terms of the polynomials appearing in them (Definition 3.1). We show in Theorem 3.4 that a point is a singularity of the system unless it admits a basis of power series solutions in which the starting terms are as small as possible with respect to some term order. Then a singularity is apparent if the system admits a full basis of power series solutions, the starting terms of which are not as small as possible.
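The two univariate examples above, and the indicial-polynomial computation at 0, can be checked directly in a computer algebra system. The following sympy sketch is only an illustration (not code from the paper); it verifies that x^{-1} solves the first equation, that x solves the second, and that the indicial polynomial of x∂ − 1 at 0 is l − 1, whose only root is the nonnegative integer 1:

```python
import sympy as sp

x, l = sp.symbols('x l')

# x*f' + f = 0 has the solution 1/x, singular at the root x = 0
# of the coefficient of the highest derivative
f = 1/x
assert sp.simplify(x*sp.diff(f, x) + f) == 0

# x*g' - g = 0 has only constant multiples of x as solutions,
# so the root 0 of the leading coefficient is apparent
g = x
assert sp.simplify(x*sp.diff(g, x) - g) == 0

# indicial polynomial of x*d - 1 at 0: substitute x**l, divide by x**l
ind = sp.simplify((x*sp.diff(x**l, x) - x**l) / x**l)
assert sp.simplify(ind - (l - 1)) == 0  # single root l = 1
```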
We then prove in Theorem 4.6 that apparent singularities can be removed like in the univariate case by adding suitable additional solutions to the system at hand. The resulting system will be contained in the Weyl closure [23] of the original ideal, but unlike Tsai [23] we cannot guarantee that it is equal to the Weyl closure. Based on Theorem 3.4 and Theorem 4.6, we show how to remove a given apparent singularity (Algorithms 5.10 and 5.19), and how to detect whether a given point is an apparent singularity (Algorithm 5.13). Finally, we present an algorithm for computing formal power series solutions of a D-finite system at apparent singularities.

In this section, we recall some notions and conclusions concerning linear partial differential operators, Gröbner bases, formal power series, solution spaces and Wronskians for D-finite systems. We also specify notation to be used in the rest of this paper.
Throughout the paper, we assume that K is a field of characteristic zero and n is a positive integer. For instance, K can be the field of complex numbers. Let K[x] = K[x_1, ..., x_n] be the ring of usual commutative polynomials over K. The quotient field of K[x] is denoted by K(x). Then we have the ring of differential operators with rational function coefficients K(x)[∂_1, ..., ∂_n], in which addition is coefficient-wise and multiplication is defined by associativity via the commutation rules

(i) ∂_i ∂_j = ∂_j ∂_i;
(ii) ∂_i f = f ∂_i + ∂f/∂x_i for each f ∈ K(x),

where ∂f/∂x_i is the usual derivative of f with respect to x_i, 1 ≤ i, j ≤ n. This ring is an Ore algebra [21, 9] and is denoted by K(x)[∂] for brevity.

Another ring is K[x][∂] := K[x_1, ..., x_n][∂_1, ..., ∂_n], which is a subring of K(x)[∂]. We call it the ring of differential operators with polynomial coefficients, or the Weyl algebra [22, Section 1.1].

A left ideal I in K(x)[∂] is called D-finite if the quotient K(x)[∂]/I is a finite-dimensional vector space over K(x). The dimension of K(x)[∂]/I as a vector space over K(x) is called the rank of I and denoted by rank(I).

For a subset S of K(x)[∂], the left ideal generated by S is denoted by K(x)[∂]S. For instance, let I = Q(x_1, x_2)[∂_1, ∂_2] {∂_1 − 1, ∂_2 − 1}. Then I is D-finite because the quotient Q(x_1, x_2)[∂_1, ∂_2]/I is a vector space of dimension 1 over Q(x_1, x_2). Thus, rank(I) = 1.

Gröbner bases in K(x)[∂] are well known [15] and implementations for them are available for example in the Maple package Mgfun [8] and in the Mathematica package
HolonomicFunctions.m [18]. We briefly summarize some facts about Gröbner bases.

We denote by T(∂) the commutative monoid generated by ∂_1, ..., ∂_n. An element of T(∂) is called a term. For a vector u = (u_1, ..., u_n) ∈ N^n, the symbol ∂^u stands for the term ∂_1^{u_1} ··· ∂_n^{u_n}. The order of ∂^u is defined to be |u| := u_1 + ··· + u_n. For a nonzero operator P ∈ K(x)[∂], the order of P is defined to be the highest order of the terms that appear in P effectively.

Let ≺ be a graded monomial ordering [10, Definition 1, page 55] on N^n. Since there is a one-to-one correspondence between terms in T(∂) and elements in N^n, the ordering ≺ on N^n induces an ordering on T(∂) with ∂^u ≺ ∂^v if u ≺ v. Our main results on apparent singularities are based on the fact that there are at most finitely many terms lower than a given term. So we fix a graded ordering ≺ on N^n in the rest of the paper.

For a nonzero element P ∈ K(x)[∂], the head term of P, denoted by HT(P), is the highest term appearing in P. The coefficient of HT(P) is called the head coefficient of P and is denoted by HC(P). For a subset S of nonzero elements in K(x)[∂], HT(S) and HC(S) stand for the sets of head terms and head coefficients of the elements in S, respectively.

For a Gröbner basis G in K(x)[∂], a term is said to be parametric if it is not divisible by any term in HT(G). The set of exponents of all parametric terms is referred to as the set of parametric exponents of G and denoted by PE(G). If K(x)[∂]G is D-finite, then its rank is also called the rank of G and denoted by rank(G), which is equal to |PE(G)|. In examples, we use the graded lexicographic ordering with ∂_n ≻ ··· ≻ ∂_1.

2.3 Formal power series

Let K[[x]] be the ring of formal power series with respect to x_1, ..., x_n. For P ∈ K[x][∂] and f ∈ K[[x]], there is a natural action of P on f, which is denoted by P(f).
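This action, together with commutation rule (ii) from the previous subsection, can be illustrated for n = 1 in sympy. The sketch below (an illustration, not code from any of the packages cited above) checks that the operator ∂·x, applied to a generic function, acts like x·∂ + 1:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# action of the product d * x: first multiply by x, then differentiate
lhs = sp.diff(x * f, x)
# commutation rule (ii): d * x = x * d + dx/dx = x * d + 1
rhs = x * sp.diff(f, x) + f
assert sp.expand(lhs - rhs) == 0
```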
For P, Q ∈ K[x][∂], it is straightforward to verify that

P Q(f) = P(Q(f)).   (1)

For u = (u_1, ..., u_n) ∈ N^n, the product (u_1!) ··· (u_n!) is denoted by u!, and x_1^{u_1} ··· x_n^{u_n} by x^u. A formal power series can always be written in the form

f = Σ_{u ∈ N^n} (c_u / u!) x^u,

where c_u ∈ K. Such a form is convenient for differentiation. Taking the constant term c_0 of a formal power series f gives rise to a ring homomorphism, which is denoted by φ. A direct calculation yields

φ(∂^u(f)) = c_u.   (2)

Thus, we can determine whether a formal power series is zero by differentiating and taking constant terms, as stated in the next lemma.

Lemma 2.1.
Let f ∈ K[[x]]. Then f = 0 if and only if, for all u ∈ N^n, φ(∂^u(f)) = 0.

The following result appears in [11] for s = 1, but the proof applies literally also for arbitrary values of s. See [25, Lemma 4.3.3] for a detailed verification.

Lemma 2.2.
Let p_1, p_2, ..., p_s and q be polynomials in K[x] with gcd(p_1, p_2, ..., p_s, q) = 1. If p_i/q has a power series expansion for each i ∈ {1, 2, ..., s}, then the constant term of q is nonzero.

The fixed ordering ≺ on N^n also induces an ordering on the monoid T(x) generated by x_1, ..., x_n in the following manner: x^u ≺ x^v if u ≺ v. A nonzero element f ∈ K[[x]] can be written as

f = (c_u / u!) x^u + higher monomials with respect to ≺,

where c_u ∈ K is nonzero. We call u the initial exponent of f.

Basic facts about solutions of linear partial differential polynomials are presented in [16, Chapter IV, Section 5]. We recall them in terms of D-finite ideals. The first proposition is a special case of Proposition 2 in [16, page 152].
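For a truncated series, the coefficients c_u = φ(∂^u(f)) and the initial exponent can be computed mechanically. The sketch below is an illustration with an ad hoc bivariate polynomial (not an example from the paper); the lexicographic tie-break in the graded ordering is one possible choice:

```python
import sympy as sp
from itertools import product

x1, x2 = sp.symbols('x1 x2')
f = x1*x2 + 3*x1**2*x2 + x1**3   # truncation of some power series

def c(u):
    # c_u = phi(d^u(f)): differentiate, then take the constant term
    g = sp.diff(f, x1, u[0], x2, u[1])
    return g.subs({x1: 0, x2: 0})

def grlex_key(u):
    # graded ordering: compare total degree first, then a lex tie-break
    return (sum(u), u)

support = [u for u in product(range(5), repeat=2) if c(u) != 0]
# initial exponent = the smallest exponent in the support w.r.t. grlex
assert min(support, key=grlex_key) == (1, 1)
assert c((1, 1)) == 1   # coefficient of x1*x2, times u! = 1
```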
Proposition 2.3.
For a left ideal I ⊂ K(x)[∂] with rank d, there exists a differential field E containing K[[x]] such that the solution space of I has dimension d over C_E, where C_E stands for the subfield of the constants in E.

In what follows, E is a differential field as described in the above proposition. For a D-finite ideal I, the solution space of I in E is denoted by sol_E(I). Likewise, for a finite-rank Gröbner basis G, the solution space of K(x)[∂]G is simply denoted by sol_E(G). The next proposition is an analog of the differential Nullstellensatz for D-finite ideals. It is an easy consequence of Corollary 1 in [16, page 152].

Proposition 2.4.
Let V ⊂ E be a d-dimensional linear subspace over C_E. Then there exists a unique left ideal I ⊂ E[∂] of rank d such that V = sol_E(I). Furthermore, an operator P belongs to I if and only if P annihilates every element of V.

Linear dependence over constants can be determined by Wronskian-like determinants [16, Chapter II, Theorem 1], which implies that a finite number of elements in K[[x]] are linearly independent over K if and only if they are linearly independent over any field of constants that contains K.

Wronskian-like determinants are expressed by elements of T(∂) via wedge notation in [19]. For v_1, v_2, ..., v_ℓ ∈ N^n with ℓ a positive integer, the exterior product ∂^{v_1} ∧ ∂^{v_2} ∧ ··· ∧ ∂^{v_ℓ} is defined as the multilinear function from E^ℓ to E that maps (z_1, ..., z_ℓ) ∈ E^ℓ to the determinant

| ∂^{v_1}(z_1)  ∂^{v_1}(z_2)  ···  ∂^{v_1}(z_ℓ) |
| ∂^{v_2}(z_1)  ∂^{v_2}(z_2)  ···  ∂^{v_2}(z_ℓ) |
|     ...           ...        ⋱       ...      |
| ∂^{v_ℓ}(z_1)  ∂^{v_ℓ}(z_2)  ···  ∂^{v_ℓ}(z_ℓ) |.

It follows from Theorem 1 in [16, Chapter II] that z_1, ..., z_ℓ are linearly independent over C_E if there exist v_1, ..., v_ℓ ∈ N^n such that

(∂^{v_1} ∧ ··· ∧ ∂^{v_ℓ})(z_1, ..., z_ℓ) ≠ 0.

Let G be a reduced and finite-rank Gröbner basis in K(x)[∂]. The Wronskian operator of G is defined to be

w_G := ⋀_{u ∈ PE(G)} ∂^u.

The following proposition is Lemma 4 in [19] in slightly different notation.
Proposition 2.5.
Let d = rank(G), z_1, ..., z_d ∈ sol_E(G) and PE(G) = {u_1, ..., u_d}.

(i) The elements z_1, ..., z_d are linearly independent over C_E if and only if w_G(z_1, ..., z_d) is nonzero.

(ii) Let ∂^v be the head term of an element g of G, and let z_1, ..., z_d be linearly independent over C_E. Set z = (z_1, ..., z_d) and

w_G ∧ ∂^v(z, ·) =
| ∂^{u_1}(z_1)  ∂^{u_1}(z_2)  ···  ∂^{u_1}(z_d)  ∂^{u_1} |
| ∂^{u_2}(z_1)  ∂^{u_2}(z_2)  ···  ∂^{u_2}(z_d)  ∂^{u_2} |
|     ...           ...        ...      ...         ...   |
| ∂^{u_d}(z_1)  ∂^{u_d}(z_2)  ···  ∂^{u_d}(z_d)  ∂^{u_d} |
| ∂^{v}(z_1)    ∂^{v}(z_2)    ···  ∂^{v}(z_d)    ∂^{v}   |,

in which the elements of T(∂) are placed on the right-hand side of a product. Then

w_G(z)^{-1} (w_G ∧ ∂^v(z, ·)) = HC(g)^{-1} g.

The two propositions listed above will be used to reconstruct a Gröbner basis from its solutions.
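Part (i) can be tested on concrete functions. The following sketch uses two sample functions and the exponents u_1 = (0,0), u_2 = (1,0), both chosen here purely for illustration; a nonzero Wronskian-like determinant certifies linear independence over constants:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# two sample functions, playing the role of z1, z2
z1 = sp.exp(x1 + x2)
z2 = x1 * sp.exp(x2)

# Wronskian-like determinant for u1 = (0,0) and u2 = (1,0):
# the rows apply d^(0,0) and d^(1,0) to (z1, z2)
W = sp.Matrix([[z1, z2],
               [sp.diff(z1, x1), sp.diff(z2, x1)]]).det()
# nonzero determinant => z1, z2 are linearly independent over constants
assert sp.simplify(W) != 0
```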
Let P ∈ K[x][∂] be of the form

P = c_{u_m} ∂^{u_m} + c_{u_{m−1}} ∂^{u_{m−1}} + ··· + c_{u_1} ∂^{u_1},

where c_{u_1}, ..., c_{u_m} ∈ K[x] \ {0} and ∂^{u_1}, ..., ∂^{u_m} are distinct. We say that P is primitive if

gcd(c_{u_1}, c_{u_2}, ..., c_{u_m}) = 1.

For brevity, a Gröbner basis G in K(x)[∂] is said to be primitive if it is finite, reduced and its elements are primitive ones in K[x][∂]. Every nontrivial left ideal in K(x)[∂] has a primitive Gröbner basis. The goal of this section is to characterize ordinary points of a primitive Gröbner basis of finite rank in terms of formal power series solutions.

Our definitions of singularities and ordinary points are motivated by the material after [22, Lemma 1.4.21].
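Primitivity is simply a gcd condition on the polynomial coefficients of an operator. The following sympy sketch, with made-up coefficients c_u chosen only for illustration, checks the condition and normalizes a non-primitive coefficient list by dividing out the content:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# polynomial coefficients of a hypothetical operator P in K[x][d]
coeffs = [x1*x2, x1**2*x2, x1*x2**2]

g = sp.gcd_list(coeffs)
# P is primitive iff gcd of its coefficients is 1; here it is x1*x2
assert sp.simplify(g - x1*x2) == 0

# dividing out the content yields a primitive coefficient list
primitive = [sp.cancel(c / g) for c in coeffs]
assert sp.gcd_list(primitive) == 1
```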
Definition 3.1.
Assume that G is a primitive Gröbner basis of finite rank. A point α ∈ K^n is called an ordinary point of G if, for every p ∈ HC(G), p(α) ≠ 0. Otherwise, it is called a singularity of G.

The above definitions are compatible with those in the univariate case [1, 7]. Note that the origin is an ordinary point of G if and only if each element of HC(G) has a nonzero constant term.

Example 3.2.
Consider the Gröbner basis in Q(x_1, x_2)[∂_1, ∂_2]

G = { ∂_2 − ∂_1, ∂_1 + 1 }.

We find that
HT(G) = {∂_2, ∂_1} and HC(G) = {1}. So G has no singularity.

Example 3.3.
Consider the Gröbner basis [19, Example 3] in Q(x_1, x_2)[∂_1, ∂_2]

G = { x_1 ∂_1^2 − (x_1 x_2 − 1) ∂_1 − x_2, x_2 ∂_2 − x_1 ∂_1 }.

In this case,
HT(G) = {∂_1^2, ∂_2} and HC(G) = {x_1, x_2} and PT(G) = {1, ∂_1}. The singularities of G are {(a, b) ∈ Q^2 | a = 0 or b = 0}, which are two lines in Q^2. In particular, the origin is a singularity.

3.2 Characterization of ordinary points

From now on, we focus on formal power series solutions of a primitive Gröbner basis around the origin, as a point in K^n can always be translated to the origin, and we may assume that K is algebraically closed when necessary.

Theorem 3.4.
Let G be a primitive Gröbner basis of finite rank. Then the origin is an ordinary point of G if and only if G has rank(G) many K-linearly independent formal power series solutions whose initial exponents are exactly those in PE(G).

Proof. Let G = {G_1, ..., G_k}, ∂^{v_i} = HT(G_i) and ℓ_i = HC(G_i), i = 1, ..., k. Then we can write

G_i = ℓ_i ∂^{v_i} + a linear combination of parametric terms over K[x].

Necessity.
Assume that the origin is an ordinary point of G. Then none of the ℓ_i's vanishes at the origin. We show how to construct formal power series solutions of G by an approach described in [24].

We associate to each tuple u ∈ PE(G) an arbitrary constant c_u ∈ K. For a non-parametric term ∂^v, let N_v be its normal form with respect to G. Although N_v belongs to K(x)[∂], there exists a power product ℓ_v of ℓ_1, ..., ℓ_k such that ℓ_v N_v ∈ K[x][∂]. Write

ℓ_v(x) N_v = Σ_{u ∈ PE(G)} a_{u,v}(x) ∂^u

with a_{u,v} ∈ K[x]. Set

c_v = ℓ_v(0)^{-1} Σ_{u ∈ PE(G)} a_{u,v}(0) c_u.   (3)

Note that ℓ_v can be chosen to be any power product of ℓ_1, ..., ℓ_k such that ℓ_v N_v belongs to K[x][∂]. Let

f = Σ_{u ∈ N^n} (c_u / u!) x^u.

We claim that f is a formal power series solution of G, that is,

G_i(f) = 0, i = 1, ..., k.   (4)

By (1) and Lemma 2.1, it suffices to prove

φ(∂^u G_i(f)) = 0   (5)

for all u ∈ N^n and i ∈ {1, ..., k}. We proceed by Noetherian induction on the term order ≺.

Starting with ∂^0, we can write

∂^0 G_i = G_i = ℓ_i(x) ∂^{v_i} − Σ_{u ∈ PE(G)} a_{u,v_i}(x) ∂^u,   (6)

where a_{u,v_i} ∈ K[x]. It follows that

ℓ_i(x) N_{v_i} = Σ_{u ∈ PE(G)} a_{u,v_i}(x) ∂^u.
By (3),

ℓ_i(0) c_{v_i} − Σ_{u ∈ PE(G)} a_{u,v_i}(0) c_u = 0,

which can be rewritten as

φ(ℓ_i(x)) φ(∂^{v_i}(f)) − Σ_{u ∈ PE(G)} φ(a_{u,v_i}(x)) φ(∂^u(f)) = 0.

Since φ is a ring homomorphism, we have

φ( ℓ_i(x) ∂^{v_i}(f) − Σ_{u ∈ PE(G)} a_{u,v_i}(x) ∂^u(f) ) = 0.

We see that φ(G_i(f)) = 0 by (6).

Assume that ∂^v is a term higher than ∂^0 and that, for all w with w ≺ v and all i ∈ {1, ..., k}, φ(∂^w G_i(f)) = 0. Reducing ∂^{v + v_i} modulo G, we have

ℓ(x) ∂^{v + v_i} = p_v(x)(∂^v G_i) + ( Σ_{w ≺ v} Σ_{s=1}^{k} p_{w,s}(x)(∂^w G_s) ) + ℓ(x) N_{v + v_i},

where ℓ(x) and p_v(x) are two power products of ℓ_1(x), ..., ℓ_k(x), and p_{w,s}(x) belongs to K[x] for all w ≺ v and s ∈ {1, ..., k}. Moreover, ℓ(x) N_{v + v_i} belongs to K[x][∂]. Applying the above equality to f, we get

ℓ(x) ∂^{v + v_i}(f) = p_v(x)(∂^v G_i)(f) + ( Σ_{w ≺ v} Σ_{s=1}^{k} p_{w,s}(x)(∂^w G_s)(f) ) + (ℓ(x) N_{v + v_i})(f).

Applying φ to the above equality yields

φ( ℓ(x) ∂^{v + v_i}(f) ) = p_v(0) φ(∂^v G_i(f)) + Σ_{w ≺ v} Σ_{s=1}^{k} p_{w,s}(0) φ(∂^w G_s(f)) + φ((ℓ(x) N_{v + v_i})(f)).

By the induction hypothesis, φ(∂^w G_s(f)) = 0 for all w with w ≺ v and for all s ∈ {1, ..., k}. Thus,

φ( ℓ(x) ∂^{v + v_i}(f) ) = p_v(0) φ(∂^v G_i(f)) + φ((ℓ(x) N_{v + v_i})(f)).

Writing ℓ(x) N_{v + v_i} = Σ_{u ∈ PE(G)} a_{u,v+v_i}(x) ∂^u with a_{u,v+v_i}(x) ∈ K[x], we see that the above equality implies

ℓ(0) c_{v + v_i} = p_v(0) φ(∂^v G_i(f)) + Σ_{u ∈ PE(G)} a_{u,v+v_i}(0) c_u.

It follows from (3) that

p_v(0) φ(∂^v G_i(f)) = 0.

Since p_v(0) is nonzero, φ(∂^v G_i(f)) is equal to zero. This proves (5). Therefore, our claim (4) holds. Since there are rank(G) many parametric terms, the D-finite system G has rank(G) many K-linearly independent formal power series solutions with initial exponents in PE(G).

Sufficiency.
Let d = rank(G). Assume that f_1, ..., f_d are K-linearly independent formal power series solutions of G whose initial exponents are u_1, ..., u_d, respectively. Without loss of generality, we assume further that, for all j, m ∈ {1, ..., d}, φ(∂^{u_m}(f_j)) = δ_{mj}, where δ_{mj} stands for Kronecker's symbol.

Let f = (f_1, ..., f_d). By the above assumption, the constant term of w_G(f) is nonzero. So the formal power series w_G(f) is invertible in K[[x]].

Let F_i = (w_G ∧ ∂^{v_i})(f, ·). By Proposition 2.5,

ℓ_i^{-1} G_i = w_G(f)^{-1} F_i ∈ K[[x]][∂].   (7)

Since G_i is primitive, we can write G_i as

ℓ_i ∂^{v_i} + Σ_{j=1}^{d} ℓ_{ij} ∂^{u_j},

where ℓ_{ij} ∈ K[x] and gcd(ℓ_i, ℓ_{i1}, ..., ℓ_{id}) = 1. By (7), we have

ℓ_{ij}/ℓ_i ∈ K[[x]] for each j = 1, ..., d.

It follows from Lemma 2.2 that the constant term of ℓ_i is nonzero. Hence, the origin is an ordinary point of G.

The proof for the necessity of the above theorem also holds for an arbitrary left (not necessarily D-finite) ideal K(x)[∂]G, provided that the origin is an ordinary point of G. In addition, the above theorem also holds when the fixed ordering ≺ is not graded. But the results in the next section hinge on the assumption that ≺ is graded.

The goal of this section is to define apparent singularities of a primitive Gröbner basis of finite rank, and to characterize them.
Definition 4.1.
Let G be a primitive Gröbner basis of rank d.

(i) Assume that the origin is a singularity of G. We call the origin an apparent singularity of G if G has d linearly independent formal power series solutions over K.

(ii) Assume that M is a primitive Gröbner basis of finite rank. We call M a left multiple of G if K(x)[∂]M ⊂ K(x)[∂]G.

Example 4.2.
The solution space sol_E(G) of the Gröbner basis

G = { x_2 ∂_2 + ∂_1 − x_2 − 1, ∂_1^2 − ∂_1 }

in K(x_1, x_2)[∂_1, ∂_2] is generated by {exp(x_1 + x_2), x_2 exp(x_2)}. In this case,

HT(G) = {∂_2, ∂_1^2}, HC(G) = {x_2, 1} and PE(G) = {(0, 0), (1, 0)}.

Therefore, the origin is a singularity of G. As G has two K-linearly independent formal power series solutions, the origin is an apparent singularity of G.

Let M be another Gröbner basis such that

K(x)[∂]·M = K(x)[∂]G ∩ K(x)[∂]·{x_1∂_1 − 1, ∂_2}.

We find that M is a left multiple of G with rank 3.

Example 4.3.
The solution space sol_E(G) of the Gröbner basis

G = { x_2^2 ∂_2 − x_1^2 ∂_1 + x_1 − x_2, ∂_1^2 }

in K(x_1, x_2)[∂_1, ∂_2] is generated by {x_1 + x_2, x_1 x_2}. In this case,

HT(G) = {∂_2, ∂_1^2}, HC(G) = {x_2^2, 1} and PE(G) = {(0, 0), (1, 0)}.

Therefore, the origin is an apparent singularity of G.

Set S = {(0, 0), (0, 1), (2, 0), (0, 2)}. Let M be another Gröbner basis with

K(x)[∂]·M = K(x)[∂]G ∩ ⋂_{(s,t) ∈ S} K(x)[∂]·{x_1∂_1 − s, x_2∂_2 − t}.

We find that rank(M) = 6. By Definition 4.1(ii), M is a left multiple of G with rank(M) = 6.

For a subset S of K(x)[∂], we denote by IE_0(S) the set of initial exponents of nonzero elements in sol_E(S) ∩ K[[x]] and call it the set of initial exponents of S at the origin. Then |IE_0(S)| is the dimension of sol_E(S) ∩ K[[x]], because any set of formal power series with distinct initial exponents is linearly independent over K. For a primitive Gröbner basis G, the origin is an ordinary point of G if and only if IE_0(G) = PE(G) by Theorem 3.4, and it is an apparent singularity if and only if IE_0(G) ≠ PE(G) but |IE_0(G)| = |PE(G)| by Definition 4.1.

Before characterizing apparent singularities, we prove two lemmas. The results in the first lemma are likely known, but we were not able to find proper references containing them.

Lemma 4.4.
Let I and J be D-finite ideals in K(x)[∂]. Then

(i) rank(I ∩ J) + rank(I + J) = rank(I) + rank(J).
(ii) dim sol_E(I ∩ J) + dim sol_E(I + J) = dim sol_E(I) + dim sol_E(J).
(iii) sol_E(I ∩ J) = sol_E(I) + sol_E(J).

Proof. Let V be a vector space over any field, and let U and W be two subspaces of V. Set

ψ: V/(U ∩ W) → V/U × V/W,  v + U ∩ W ↦ (v + U, −v + W),

and

φ: V/U × V/W → V/(U + W),  (a + U, b + W) ↦ a + b + (U + W).

It is straightforward to verify that the following sequence is exact:¹

0 → V/(U ∩ W) --ψ--> V/U × V/W --φ--> V/(U + W) → 0.

It follows that
V/U × V/W is linearly isomorphic to V/(U ∩ W) ⊕ V/(U + W). In particular,

dim(V/(U ∩ W)) + dim(V/(U + W)) = dim(V/U) + dim(V/W).

Setting V = K(x)[∂], U = I and W = J, we prove the first assertion. The second assertion follows from the first one and Proposition 2.3.

For the last assertion, it is evident that sol_E(I) + sol_E(J) ⊂ sol_E(I ∩ J). On the other hand,

dim(sol_E(I) + sol_E(J))
= dim(sol_E(I)) + dim(sol_E(J)) − dim(sol_E(I) ∩ sol_E(J))
= dim(sol_E(I)) + dim(sol_E(J)) − dim(sol_E(I + J))   (since sol_E(I) ∩ sol_E(J) = sol_E(I + J))
= dim(sol_E(I ∩ J))   (by the second assertion).

Hence, sol_E(I ∩ J) = sol_E(I) + sol_E(J).

As a matter of notation, we define N^n_m = {u ∈ N^n | |u| ≤ m} for m ∈ N. The second lemma illustrates a connection between parametric exponents and initial ones.

Lemma 4.5.
Let M be a primitive Gröbner basis. Assume that sol_E(M) has a basis in K[[x]] and IE_0(M) = N^n_m for some m ∈ N. Then IE_0(M) = PE(M). Consequently, the origin is an ordinary point of M.

Proof. Assume that f_1, ..., f_ℓ ∈ K[[x]] form a basis of sol_E(M) and their initial exponents are distinct. Then ℓ = |N^n_m|. Let f = (f_1, ..., f_ℓ), and set

w = ⋀_{u ∈ IE_0(M)} ∂^u.

Then w(f) is a nonzero element in K[[x]]. For every v ∈ N^n with |v| = m + 1, let F_v = (w ∧ ∂^v)(f, ·), which belongs to K[[x]][∂]. Then HT(F_v) = ∂^v because w(f) is nonzero and the ordering ≺ is graded. Since F_v annihilates f_1, ..., f_ℓ,

¹ We thank Professor Yang Han for bringing this exact sequence to our attention, which shortens our original proof.
it vanishes on sol_E(M). It follows from Proposition 2.4 that F_v belongs to the extended ideal E[∂]·M, in which M is still a Gröbner basis. Thus, F_v can be reduced to zero by M. Accordingly, ∂^v is not a parametric derivative of M. In other words, PE(M) is a subset of N^n_m. Hence, PE(M) = N^n_m because |PE(M)| = ℓ and ℓ = |N^n_m|. The origin is an ordinary point by Theorem 3.4.

Theorem 4.6.
Let G be a primitive Gröbner basis of rank d. Assume that the origin is a singularity of G. Then the origin is an apparent singularity of G if and only if it is an ordinary point of some left multiple of G.

Proof. Sufficiency. Assume that the origin is an apparent singularity of G. Set m = max_{u ∈ IE_0(G)} |u|. For every v = (v_1, ..., v_n) ∈ N^n, we denote by I_v the left ideal generated by x_1∂_1 − v_1, ..., x_n∂_n − v_n in K(x)[∂]. Note that the solution space of I_v is spanned by x^v. Set

I = K(x)[∂]G and J = ⋂_{v ∈ N^n_m \ IE_0(G)} I_v.

Then the two left ideals I and J have no solution in common except the trivial one because v ∈ N^n_m \ IE_0(G). It follows from Lemma 4.4(iii) that

sol_E(I ∩ J) = sol_E(I) ⊕ sol_E(J).

In particular, sol_E(I ∩ J) has dimension |N^n_m| over C_E, because sol_E(I) and sol_E(J) have dimensions |IE_0(G)| and |N^n_m| − |IE_0(G)|, respectively. So IE_0(I ∩ J) = N^n_m. Let M be a primitive Gröbner basis of I ∩ J. Then the origin is an ordinary point of M by Lemma 4.5.

Necessity.
Assume that M is a left multiple of G and that the origin is an ordinary point of M. Then we have that sol_E(G) ⊂ sol_E(M). We need to prove that sol_E(G) has a basis in K[[x]].

Assume that {f_1, ..., f_ℓ} ⊂ K[[x]] is a basis of sol_E(M). Since sol_E(G) is contained in sol_E(M), every element of sol_E(G) is a linear combination of f_1, ..., f_ℓ over C_E. Assume that f = z_1 f_1 + ··· + z_ℓ f_ℓ, where z_1, ..., z_ℓ ∈ C_E are to be determined. Let G = {G_1, ..., G_k}. The constraints

G_j(f) = 0, j = 1, ..., k,

on f are equivalent to the constraints

z_1 G_j(f_1) + ··· + z_ℓ G_j(f_ℓ) = 0, j = 1, ..., k,

on the constants z_1, ..., z_ℓ. By comparing the coefficients of x^w (w ∈ N^n) on both sides of the above equations, we derive a linear system A z = 0, where the matrix A has infinitely many rows but ℓ columns, and z stands for the transpose of (z_1, ..., z_ℓ). Moreover, both G_1, ..., G_k and f_1, ..., f_ℓ have coefficients in K. So A is a matrix over K. We have that

f ∈ sol_E(G) ⟺ A z = 0.   (8)

Set

ker(A) = { (c_1, ..., c_ℓ)^t | A (c_1, ..., c_ℓ)^t = (0, ..., 0)^t, c_1, ..., c_ℓ ∈ C_E },

where (·)^t stands for the transpose of a vector (matrix). Since f_1, ..., f_ℓ are linearly independent over C_E, the linear independence of the elements in sol_E(G) over C_E is equivalent to the linear independence of the corresponding vectors in ker(A) over C_E. Thus, the dimension of ker(A) over C_E is equal to dim_{C_E} sol_E(G), which is d by Proposition 2.3. It follows that the rank of A is equal to ℓ − d. Since all the entries of A lie in K, there are d vectors in the intersection of ker(A) and K^ℓ which are linearly independent over K. These vectors are also linearly independent over C_E, and they give rise to a basis of sol_E(G) in K[[x]]. The origin is an apparent singularity of G by Definition 4.1.

Assume that the origin is a singularity of G.
By desingularizing the origin, we mean computing a left multiple M of G such that the origin is an ordinary point of M. The next corollary helps us to desingularize an apparent singularity.

Corollary 4.7.
Let G be a primitive Gröbner basis of finite rank. Assume that the origin is an apparent singularity of G. Set m = max_{u ∈ IE_0(G)} |u|. Then the origin is an ordinary point of a primitive Gröbner basis of the left ideal

K(x)[∂]G ∩ ⋂_{(v_1, ..., v_n) ∈ N^n_m \ IE_0(G)} K(x)[∂] {x_1∂_1 − v_1, ..., x_n∂_n − v_n}.

Proof.
This follows directly from the proof of the sufficiency part of the above theorem.
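The ideals intersected in Corollary 4.7 are of the form K(x)[∂]{x_1∂_1 − v_1, ..., x_n∂_n − v_n}, whose solution space is spanned by the single monomial x^v. A quick sympy check for n = 2 and the (arbitrarily chosen) exponent v = (2, 3):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
v = (2, 3)
f = x1**v[0] * x2**v[1]   # the monomial x^v

# each generator x_i*d_i - v_i annihilates x^v
assert sp.simplify(x1*sp.diff(f, x1) - v[0]*f) == 0
assert sp.simplify(x2*sp.diff(f, x2) - v[1]*f) == 0
```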
Let G be a primitive Gröbner basis with the origin being an apparent singularity. We want to compute a left multiple M of G such that the origin is an ordinary point of M. To this end, we need to study IE_0(G) by Corollary 4.7.

We extend the notion of indicial polynomials for linear ordinary differential operators to the D-finite case.

Let δ_i = x_i ∂_i be the Euler operator with respect to x_i, i = 1, ..., n. The commutation rules in K(x)[∂] imply that δ_i δ_j = δ_j δ_i for all i, j ∈ {1, ..., n}. For u = (u_1, ..., u_n) ∈ N^n, the symbol δ^u stands for the product δ_1^{u_1} ··· δ_n^{u_n}. Recall that the m-th falling factorial [14, Section 3.1] of x_i is

(x_i)_m = x_i (x_i − 1) ··· (x_i − m + 1),

where m ∈ N, i = 1, ..., n. Moreover, (x_i)_0 = 1.

Proposition 5.1.
The following assertions hold for Euler operators:

(i) For each m ∈ N and i ∈ {1, ..., n}, x_i^m ∂_i^m = (δ_i)_m.
(ii) For each p ∈ K[x] and x^u ∈ T(x), we have p(δ)(x^u) = p(u) x^u.

Proof. See [25, Section 5.2].

Set K[y] = K[y_1, ..., y_n] to be the ring of usual commutative polynomials with indeterminates y_1, ..., y_n.

Definition 5.2. Let a nonzero operator P ∈ K[x][∂] be of order m. Write

x^m P = Σ_{v ∈ S} x^v Σ_{|u| ≤ m} c_{u,v} δ^u,   (9)

where m = (m, ..., m) ∈ N^n and S is a finite subset of N^n. Let x^{v_0} be the minimal term among {x^v | v ∈ S} with respect to ≺ such that Σ_{|u| ≤ m} c_{u,v_0} δ^u is nonzero. We call

Σ_{|u| ≤ m} c_{u,v_0} y^u ∈ K[y]

the indicial polynomial of P, and denote it by ind(P). We further define ind(0) := 0.

By Proposition 5.1(i), we may always write x^m P in the form (9). The above definition is compatible with the univariate case [13, 22], and was already used in the multivariate setting [3, Definition 11].

Proposition 5.3.
Let P be a nonzero element in K[x][∂] and f a formal power series solution of P with initial exponent w. Then w is a zero of ind(P).

Proof. Assume that P is an operator of order m and write

x^m P = Σ_{v ∈ S} x^v Σ_{|u| ≤ m} c_{u,v} δ^u,

as given in Definition 5.2. By Proposition 5.1(ii), we have

(x^m P)(f) = [ Σ_{v ∈ S} x^v ( Σ_{|u| ≤ m} c_{u,v} δ^u ) ] (x^w + higher monomials in x)
           = x^{v_0} ( Σ_{|u| ≤ m} c_{u,v_0} δ^u )(x^w) + higher monomials in x
           = ( Σ_{|u| ≤ m} c_{u,v_0} w^u ) x^{v_0 + w} + higher monomials in x
           = 0.

Thus, Σ_{|u| ≤ m} c_{u,v_0} w^u = 0, i.e., ind(P)(w) = 0.

Example 5.4.
Consider the Gröbner basis G = {G_1, G_2} in Q(x_1, x_2)[∂_1, ∂_2], where

G_1 = x_1 x_2 ∂_1 − x_1 x_2 ∂_2 + (x_1 − x_2) and G_2 = x_1^2 ∂_1^2 − 2 x_1 ∂_1 + (2 + x_1^2).

By computation, we find that ind(G_1) = y_2 − 1 and ind(G_2) = (y_1 − 1)(y_1 − 2). It is straightforward to verify that G has two formal power series solutions

{ f_1 = x_1 x_2 sin(x_1 + x_2), f_2 = x_1 x_2 cos(x_1 + x_2) },

with in(f_1) = x_1^2 x_2 and in(f_2) = x_1 x_2. The corresponding initial exponents {(2, 1), (1, 1)} are zeros of both ind(G_1) and ind(G_2).

Definition 5.5.
Let I be a left ideal of K(x)[∂]. We call {ind(P) | P ∈ I ∩ K[x][∂]} the indicial ideal of I, and denote it by ind(I).
Theorem 5.6. Let I be a left ideal of K(x)[∂]. Then ind(I) is an ideal in K[y]. Moreover, ind(I) is zero-dimensional if I is D-finite.
Proof. For any two a, b ∈ ind(I), there exist two operators P, Q ∈ I ∩ K[x][∂] such that a = ind(P) and b = ind(Q). Let u and v be the respective orders of P and Q. Set u = (u, . . . , u) and v = (v, . . . , v). Expressing x^u P and x^v Q as polynomials in x with coefficients in K[δ] placed on the right-hand side of the powers of x, we have
x^u P = x^s (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms,
x^v Q = x^t (Σ_{|u| ≤ v} c_{u,t} δ^u) + higher terms.
Thus, a = Σ_{|u| ≤ u} c_{u,s} y^u and b = Σ_{|u| ≤ v} c_{u,t} y^u. Let L = x^t (x^u P) + x^s (x^v Q), which belongs to I. Then
L = x^{s+t} (Σ_{|u| ≤ u} c_{u,s} δ^u + Σ_{|u| ≤ v} c_{u,t} δ^u) + higher terms.
Let m be the order of L and m = (m, . . . , m). Then
x^m L = x^{s+t+m} (Σ_{|u| ≤ u} c_{u,s} δ^u + Σ_{|u| ≤ v} c_{u,t} δ^u) + higher terms.
Thus, a + b = ind(L).
Assume that r ∈ K[y]. We want to prove that ra ∈ ind(I). Since r is a sum of monomials in y_1, . . . , y_n, it suffices to prove that ra ∈ ind(I) for each term r. Assume that r = y^w, where w = (w_1, . . . , w_n) ∈ N^n. Then
x^u P = x^s (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms,
where s = (s_1, . . . , s_n) ∈ N^n. Let H = (Π_{i=1}^n (δ_i − s_i)^{w_i}) x^u P, which belongs to I, and note that (δ_i − k) x_i^k = x_i^k δ_i for all i ∈ {1, . . . , n} and k ∈ N. Then
H = (Π_{i=1}^n (δ_i − s_i)^{w_i}) x^s (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms
= (Π_{i=1}^n (δ_i − s_i)^{w_i} x_i^{s_i}) (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms
= (Π_{i=1}^n x_i^{s_i} δ_i^{w_i}) (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms
= x^s (δ^w Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms.
Let m̃ be the order of H and m̃ = (m̃, . . . , m̃). Then
x^m̃ H = x^{s+m̃} δ^w (Σ_{|u| ≤ u} c_{u,s} δ^u) + higher terms.
Thus, ra = ind(H). Consequently, ind(I) is an ideal in K[y].
Assume further that I is D-finite. Then there exists a nonzero operator P_1 of order m such that P_1 ∈ I ∩ K[x_1][∂_1] (see, e.g., [17, Proposition 2.10] for a proof). By Proposition 5.1 (i), we have
x_1^m P_1 = x_1^m (c_0 + c_1 ∂_1 + · · · + c_m ∂_1^m) = c_0 x_1^m + c_1 x_1^{m−1} δ_1 + · · · + c_m (δ_1)_m = Σ_{v ∈ T} x_1^v (Σ_{k ≤ m} c_{k,v} δ_1^k).
Thus, ind(P_1) ∈ K[y_1] \ {0}. In the same vein, ind(I) ∩ K[y_i] is nontrivial for each i with 2 ≤ i ≤ n. By [10, Theorem 6, page 251], ind(I) is zero-dimensional.
Given a Gröbner basis G of finite rank, the indicial ideal of K(x)[∂]G is simply denoted by ind(G). By the last paragraph of the proof of Theorem 5.6, we can construct a subideal J of ind(G) such that J is zero-dimensional. However, the above theorem does not necessarily give access to a basis of ind(G). Definition 5.7.
Let G be a primitive Gröbner basis of finite rank. Assume that J is a zero-dimensional ideal contained in ind(G). The set of nonnegative integer solutions of J is called a set of initial exponent candidates of G.
By Proposition 5.3, the set of initial exponents of formal power series solutions of G must be contained in a set of initial exponent candidates of G. Such a candidate set can be obtained by computing the nonnegative integer solutions of a zero-dimensional system over K.
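Once a zero-dimensional subideal J ⊆ ind(G) is in hand, its nonnegative integer solutions can be enumerated directly, provided a bound on the integer zeros is known (such a bound exists because J is zero-dimensional; e.g., the nonnegative integer roots of univariate eliminants bound each coordinate). A minimal sketch, assuming the generators of J are supplied as Python callables together with such a bound:

```python
from itertools import product

def candidate_exponents(polys, n, bound):
    # common nonnegative integer zeros of `polys` with all coordinates <= bound;
    # `bound` is an assumed a priori bound, valid when the ideal is zero-dimensional
    return {u for u in product(range(bound + 1), repeat=n)
            if all(p(u) == 0 for p in polys)}

# e.g. the system { y_2 - 1, (y_1 - 1)(y_1 - 2) }:
S = candidate_exponents([lambda u: u[1] - 1,
                         lambda u: (u[0] - 1) * (u[0] - 2)], 2, 10)
# S == {(1, 1), (2, 1)}
```

Brute force is only a placeholder for the elimination-based search a real implementation would use, but it makes the definition concrete.
Example 5.8.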
Consider the Gröbner basis G = {G_1, G_2} from Example 5.4, where
G_1 = x_1x_2∂_1 − x_1x_2∂_2 + (x_1 − x_2)  and  G_2 = x_1^2∂_1^2 − 2x_1∂_1 + (2 + x_1^2).
By computation, we find that ind(G_1) = y_2 − 1 and ind(G_2) = (y_1 − 1)(y_1 − 2). By the above definition, the set {(2, 1), (1, 1)} is a set of initial exponent candidates of G. Actually, (2, 1) and (1, 1) are the initial exponents of the formal power series solutions x_1x_2 sin(x_1 + x_2) and x_1x_2 cos(x_1 + x_2), respectively.
The following example shows that initial exponent candidates of G do not necessarily give rise to formal power series solutions of G. Example 5.9.
Consider the Gröbner basis in Q(x_1, x_2)[∂_1, ∂_2]:
G = {G_1, G_2} = {x_1x_2∂_2 + (−x_1^2 + 2x_1x_2)∂_1 − 2x_2, (x_1^3 − x_1^2x_2)∂_1^2 + 2x_1x_2∂_1 − 2x_2}.
By computation, we find that ind(G_1) = y_2 − y_1 and ind(G_2) = (y_1 − 1)y_1. Thus, a set of initial exponent candidates of G is
S = {(0, 0), (1, 1)}.
Actually, sol_E(G) is spanned by {x_1/(x_2 − x_1), x_1x_2}. In this case, (1, 1) is the initial exponent of x_1x_2. However, (0, 0) is not the initial exponent of any formal power series solution of G.
We present two algorithms: one for removing apparent singularities, and the other for detecting whether a singularity is apparent. Algorithm 5.10.
Given a primitive Gröbner basis G of finite rank such that the origin is an apparent singularity of G, compute a left multiple M of G such that the origin is an ordinary point of M.
(1) Set d := rank(G).
(2) Compute a set of initial exponent candidates S by Theorem 5.6.
(3) For each B ⊂ S with |B| = d,
(3.1) set m := max_{u ∈ B} |u|;
(3.2) compute a primitive Gröbner basis M_B of the left ideal
K(x)[∂]G ∩ ⋂_{(v_1, . . . , v_n) ∈ N^n_m \ B} K(x)[∂]{x_1∂_1 − v_1, . . . , x_n∂_n − v_n};
(3.3) if the origin is an ordinary point of M_B, then return M_B.
The above algorithm evidently terminates. We have IE(G) ⊂ S and |IE(G)| = d, since the origin is an apparent singularity of G. It follows from Corollary 4.7 that there exists a subset B of S with |B| = d such that the origin is an ordinary point of M_B. So the above algorithm is correct. Example 5.11.
Consider the Gröbner basis in Example 4.2:
G = {x_2∂_2 + ∂_1 − x_2 − 1, ∂_1^2 − ∂_1},
where sol_E(G) is spanned by {exp(x_1 + x_2), x_2 exp(x_2)} with initial exponents B = {(0, 0), (0, 1)}. In this case, the origin is an apparent singularity of G.
Note that N^2_1 = {(i, j) ∈ N^2 | i + j ≤ 1} = {(0, 0), (1, 0), (0, 1)}. Then N^2_1 \ B = {(1, 0)}. Let M be another Gröbner basis with
K(x)[∂]M = K(x)[∂]G ∩ K(x)[∂]{x_1∂_1 − 1, ∂_2}.
We find that HC(M) = {1 − x_1 − x_1x_2}. It follows from Definition 3.1 that M is a left multiple of G for which the origin is an ordinary point. Example 5.12.
Consider the Gröbner basis from Example 4.3:
G = {x_1^2∂_1 − x_2^2∂_2 + x_2 − x_1, ∂_2^2},
where sol_E(G) is spanned by {x_1 + x_2, x_1x_2}. In this case, the origin is an apparent singularity of G.
A candidate set of initial exponents is B = {(1, 0), (1, 1)}. Then N^2_2 \ B = {(0, 0), (0, 1), (2, 0), (0, 2)}. Let M be another Gröbner basis with
K(x)[∂]M = K(x)[∂]G ∩ ⋂_{(s,t) ∈ N^2_2 \ B} K(x)[∂]{x_1∂_1 − s, x_2∂_2 − t}.
We find that M = {∂_1^3, ∂_1^2∂_2, ∂_1∂_2^2, ∂_2^3}. The origin is an ordinary point of M. The next algorithm is a direct application of Algorithm 5.10.
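Before stating it, the output of Example 5.12 is easy to machine-check under the reading M = {∂_1^3, ∂_1^2∂_2, ∂_1∂_2^2, ∂_2^3}: these operators annihilate exactly the polynomials of total degree at most 2, a solution space of dimension 6 = |N^2_2|, which is what an ordinary point requires. A small sketch (the dict encoding of polynomials is our own):

```python
def diff(p, i):
    # partial derivative d_i of a polynomial {exponent tuple: coeff}
    r = {}
    for e, c in p.items():
        if e[i] > 0:
            e2 = list(e)
            e2[i] -= 1
            r[tuple(e2)] = c * e[i]
    return r

def apply_d(p, a):
    # apply the operator d_1^(a_1) d_2^(a_2) ... to p
    for i, k in enumerate(a):
        for _ in range(k):
            p = diff(p, i)
    return p

# a polynomial of total degree <= 2 with arbitrary coefficients
f = {(0, 0): 7, (1, 0): -3, (0, 1): 2, (2, 0): 5, (1, 1): 11, (0, 2): -1}
M = [(3, 0), (2, 1), (1, 2), (0, 3)]   # d_1^3, d_1^2 d_2, d_1 d_2^2, d_2^3
annihilated = all(apply_d(f, a) == {} for a in M)   # True
```

Conversely, ∂_1^3 applied to x_1^3 gives the nonzero constant 6, so nothing of higher degree survives: the solution space is exactly the degree-≤2 polynomials.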
Algorithm 5.13.
Given a primitive Gröbner basis G of finite rank such that the origin is a singularity of G, determine whether the origin is an apparent singularity, and return a left multiple M of G such that the origin is an ordinary point of M when the origin is an apparent singularity.
(1) Set d := rank(G).
(2) Compute a set of initial exponent candidates S by Theorem 5.6. If |S| < d, then return “the origin is not an apparent singularity”.
(3) For each B ⊂ S with |B| = d,
(3.1) set m := max_{u ∈ B} |u|;
(3.2) compute a primitive Gröbner basis M_B of the left ideal
K(x)[∂]G ∩ ⋂_{(v_1, . . . , v_n) ∈ N^n_m \ B} K(x)[∂]{x_1∂_1 − v_1, . . . , x_n∂_n − v_n};
(3.3) if the origin is an ordinary point of M_B, then return M_B.
(4) Return “the origin is not an apparent singularity”.
The above algorithm clearly terminates. If the candidate set S has fewer than d elements, then the solution space of G in E cannot be spanned by formal power series; thus, the origin is not an apparent singularity. The rest of the algorithm is correct by the correctness of Algorithm 5.10. Example 5.14.
Consider the Gröbner basis from Example 5.9:
G = {G_1, G_2} = {x_1x_2∂_2 + (−x_1^2 + 2x_1x_2)∂_1 − 2x_2, (x_1^3 − x_1^2x_2)∂_1^2 + 2x_1x_2∂_1 − 2x_2}.
Then rank(G) = 2 and the origin is a singularity of G. By computation, we find that ind(G_1) = y_2 − y_1 and ind(G_2) = (y_1 − 1)y_1. Thus, a set of initial exponent candidates of G is
S = {(0, 0), (1, 1)}.
Let B = S. Then N^2_2 \ B = {(1, 0), (0, 1), (2, 0), (0, 2)}. Let M be another Gröbner basis with
K(x)[∂]M = K(x)[∂]G ∩ ⋂_{(s,t) ∈ N^2_2 \ B} K(x)[∂]{x_1∂_1 − s, x_2∂_2 − t}.
We find that both elements of HC(M) vanish at the origin. Thus, the origin is a singularity of M. By Theorem 4.6, we conclude that the origin is not an apparent singularity of G. Actually, sol_E(G) is spanned by
{x_1/(x_2 − x_1), x_1x_2}. Example 5.15.
Consider the Gröbner basis in Q(x_1, x_2)[∂_1, ∂_2]:
G = {G_1, G_2, G_3} = {(x_2 − x_1)∂_1^2 − x_1x_2∂_1 + x_1x_2∂_2 + (x_2 − x_1),
(x_2 − x_1)∂_1∂_2 + (−1 − x_1x_2)∂_1 + (1 + x_1x_2)∂_2 + (x_2 − x_1),
(x_2 − x_1)∂_2^2 − x_1x_2∂_1 + x_1x_2∂_2 + (x_2 − x_1)}.
Then rank(G) = 3 and the origin is a singularity of G. From the three indicial polynomials ind(G_1) = (y_1 − 1)y_1, ind(G_2) = y_2(y_1 − 1) and ind(G_3) = (y_2 − 1)y_2, we find that a set of initial exponent candidates of G is
S = {(0, 0), (1, 0), (1, 1)}.
Let B = S. Then N^2_2 \ B = {(0, 1), (2, 0), (0, 2)}. Let M be another Gröbner basis with
K(x)[∂]M = K(x)[∂]G ∩ ⋂_{(s,t) ∈ N^2_2 \ B} K(x)[∂]{x_1∂_1 − s, x_2∂_2 − t}.
We find that the origin is an ordinary point of M. By Theorem 4.6, we conclude that the origin is an apparent singularity of G. Actually, sol_E(G) is spanned by
{sin(x_1 + x_2), cos(x_1 + x_2), x_1x_2}.
For a nonzero operator L ∈ K[x][∂] with apparent singularities, the randomized algorithm in [7] computes a desingularized operator for L by taking the least common left multiple of L and a random operator of appropriate order with constant coefficients. This algorithm has been proved to yield a correct desingularized operator for L with probability one, and it is more efficient than deterministic algorithms. We now extend this randomized technique to the case of several variables. To this end, we need two lemmas about determinants. Lemma 5.16.
Let A = (a_{ij}) be a full-rank (k + d) × d matrix over K and let (y_{ij}) be a (k + d) × k matrix whose entries are distinct indeterminates. Then the determinant
∆ = det [ a_{1,1} · · · a_{1,d}  y_{1,1} · · · y_{1,k}
          a_{2,1} · · · a_{2,d}  y_{2,1} · · · y_{2,k}
          ⋮                     ⋮
          a_{k+d,1} · · · a_{k+d,d}  y_{k+d,1} · · · y_{k+d,k} ]
is equal to a nonzero polynomial of the form
Σ_{(i_1, . . . , i_k) ∈ S} α_{i_1,...,i_k} y_{i_1,1} · · · y_{i_k,k},   (10)
where S is a nonempty subset of {1, . . . , k + d}^k, and α_{i_1,...,i_k} ∈ K is nonzero for every (i_1, . . . , i_k) ∈ S.
Proof. Since A is of full rank, there exists a nonzero d × d minor of A. Without loss of generality, we may assume that this minor consists of the first d rows and the first d columns. Setting y_{ij} = 0 for all i, j with 1 ≤ i ≤ d and 1 ≤ j ≤ k, we transform the determinant ∆ into
det [ a_{1,1} · · · a_{1,d}  0 · · · 0
      ⋮                     ⋮
      a_{d,1} · · · a_{d,d}  0 · · · 0
      a_{d+1,1} · · · a_{d+1,d}  y_{d+1,1} · · · y_{d+1,k}
      ⋮                     ⋮
      a_{d+k,1} · · · a_{d+k,d}  y_{d+k,1} · · · y_{d+k,k} ],
which is nonzero. So ∆ is nonzero. Collecting like terms in the determinant, we see that ∆ is of the form (10). Lemma 5.17.
Let A = (a_{ij}) be a full-rank (d + k) × d matrix over K, and let Z_1, . . . , Z_k be mutually disjoint sets of indeterminates. Let b_{1,j}, . . . , b_{d+k,j} be distinct terms in the indeterminates belonging to Z_j, j = 1, . . . , k. Then the determinant
D = det [ a_{1,1} · · · a_{1,d}  b_{1,1} · · · b_{1,k}
          a_{2,1} · · · a_{2,d}  b_{2,1} · · · b_{2,k}
          ⋮                     ⋮
          a_{d+k,1} · · · a_{d+k,d}  b_{d+k,1} · · · b_{d+k,k} ]
is a nonzero polynomial in K[Z_1 ∪ · · · ∪ Z_k].
Proof. By (10),
D = Σ_{(i_1, . . . , i_k) ∈ W} α_{i_1,...,i_k} b_{i_1,1} · · · b_{i_k,k}.   (11)
For two distinct elements (i_1, . . . , i_k), (j_1, . . . , j_k) ∈ W, the two terms b_{i_1,1} · · · b_{i_k,k} and b_{j_1,1} · · · b_{j_k,k} are also distinct by the definition of the b_{ij}. Hence, there are no like terms to be collected on the right-hand side of (11), which implies that D is nonzero. Theorem 5.18.
Let G be a primitive Gröbner basis of rank d. Assume that the origin is an apparent singularity of G, and let f_1, . . . , f_d be K-linearly independent formal power series solutions of G with distinct initial exponents u_1, . . . , u_d, respectively. Set m = max_{1 ≤ i ≤ d} |u_i| and N^n_m \ IE(G) = {u_{d+1}, . . . , u_ℓ}. For each j ∈ {1, . . . , ℓ − d}, let f_{d+j} be the formal power series expansion of
exp(z_{1,j} x_1 + · · · + z_{n,j} x_n)
at the origin, where z_{1,j}, . . . , z_{n,j} are distinct constant indeterminates. Let A be the ℓ × ℓ matrix whose element in the ith row and jth column is the formal power series ∂^{u_i} f_j evaluated at the origin, i, j ∈ {1, . . . , ℓ}. Then
(i) det(A) is a nonzero polynomial in K[z_{1,1}, . . . , z_{n,1}, . . . , z_{1,ℓ−d}, . . . , z_{n,ℓ−d}].
(ii) For 1 ≤ i ≤ n and 1 ≤ j ≤ ℓ − d, let c_{i,j} be elements of K. If det(A) does not vanish at (c_{1,1}, . . . , c_{n,1}, . . . , c_{1,ℓ−d}, . . . , c_{n,ℓ−d}), then the origin is an ordinary point of the primitive Gröbner basis M of
K(x)[∂]G ∩ ⋂_{j=1}^{ℓ−d} K(x)[∂]{∂_1 − c_{1,j}, . . . , ∂_n − c_{n,j}}. Proof.
We need two ring homomorphisms in the proof. Let R = K[z_{1,1}, . . . , z_{n,1}, . . . , z_{1,ℓ−d}, . . . , z_{n,ℓ−d}]. We define φ to be the homomorphism from R[[x]] to R that takes the constant term of a formal power series in x, which extends the homomorphism from K[[x]] to K defined in Section 2.3. By (2), for every nonzero formal power series f ∈ R[[x]] with initial exponent v, we have
φ(∂^w(f)) = 0 for all w ∈ N^n with w ≺ v, and φ(∂^v(f)) ≠ 0.   (12)
Let ψ : R → K be the substitution that maps z_{ij} to c_{ij} for every i ∈ {1, . . . , n} and j ∈ {1, . . . , ℓ − d}. Then ψ is a ring homomorphism. We extend ψ to a homomorphism from R[[x]] to K[[x]] by the rule ψ(x_i) = x_i, i = 1, . . . , n. The extended homomorphism is also denoted by ψ.
(i) Without loss of generality, we order the initial exponents u_1, . . . , u_d increasingly with respect to ≺. Then the submatrix consisting of the first d rows and first d columns of A is in lower triangular form, and its diagonal elements are all nonzero by (12). Thus, the first d columns of A are linearly independent over K. Let z_j = (z_{1,j}, . . . , z_{n,j}), j = 1, . . . , ℓ − d. Then the (d + j)th column of A consists of z_j^{u_1}, . . . , z_j^{u_ℓ}, which are distinct terms in z_j. Thus, det(A) is nonzero by Lemma 5.17.
(ii) Let g_j = ψ(f_j) for j = 1, . . . , ℓ. Then the element in the ith row and jth column of ψ(A) is φ(∂^{u_i}(g_j)) for all i, j ∈ {1, . . . , ℓ}, because ψ ∘ ∂^{u_i} = ∂^{u_i} ∘ ψ. Moreover, det(ψ(A)) is nonzero because det(A) does not vanish at (c_{1,1}, . . . , c_{n,1}, . . . , c_{1,ℓ−d}, . . . , c_{n,ℓ−d}). Let B be the ℓ × ℓ matrix whose element in the ith row and jth column is equal to ∂^{u_i} g_j for all i, j ∈ {1, . . . , ℓ}. Then φ(B) = ψ(A). Thus, det(φ(B)) is nonzero, and so is det(B). It follows that g_1, . . . , g_ℓ are linearly independent over K by Theorem 1 in [16, Chapter II] (see also Section 2.4).
Set I = K(x)[∂]G and I_j = K(x)[∂]{∂_1 − c_{1,j}, . . . , ∂_n − c_{n,j}} for all j in {1, . . . , ℓ − d}. Then g_1, . . . , g_d form a basis of sol_E(I), because g_i = f_i for i = 1, . . . , d; and g_{d+j} spans sol_E(I_j), because g_{d+j} is the formal power series expansion of exp(c_{1,j} x_1 + · · · + c_{n,j} x_n) at the origin for all j ∈ {1, . . . , ℓ − d}. It follows from Lemma 4.4 (iii) that g_1, . . . , g_ℓ form a basis of sol_E(M).
To prove that the origin is an ordinary point of M, it suffices by Lemma 4.5 to find a basis of sol_E(M) in K[[x]] whose initial exponents are exactly the elements of N^n_m. Since g_1, . . . , g_ℓ are linearly independent over K, there exists an invertible ℓ × ℓ matrix C over K such that
(h_1, . . . , h_ℓ) = (g_1, . . . , g_ℓ) C,
in which h_1, . . . , h_ℓ have distinct initial exponents. Set H = BC. Then H is the ℓ × ℓ matrix whose element in the ith row and jth column is equal to ∂^{u_i} h_j. Moreover, φ(H) is of full rank, since φ(H) is equal to φ(B)C. Suppose that there exists j ∈ {1, . . . , ℓ} such that the initial exponent of h_j does not belong to N^n_m. Then it is higher than every element of N^n_m, because ≺ is graded. It follows from (12) that the jth column of φ(H) is a zero vector, a contradiction. Therefore, the initial exponents of h_1, . . . , h_ℓ are exactly the elements of N^n_m.
Given a primitive Gröbner basis G of finite rank such that the origin is an apparent singularity of G, compute a left multiple M of G such that the origin is an ordinary point of M, or return “fail”.
(1) Set d := rank(G).
(2) Compute a set of initial exponent candidates S by Theorem 5.6.
(3) For each B ⊂ S with |B| = d,
(3.1) set m := max_{u ∈ B} |u| and ℓ := |N^n_m|;
(3.2) choose a point c = (c_{1,1}, . . . , c_{n,1}, . . . , c_{1,ℓ−d}, . . . , c_{n,ℓ−d}) ∈ K^{n(ℓ−d)};
(3.3) compute the primitive Gröbner basis M_B of the left ideal
K(x)[∂]G ∩ ⋂_{j=1}^{ℓ−d} K(x)[∂]{∂_1 − c_{1,j}, . . . , ∂_n − c_{n,j}};
(3.4) if the origin is an ordinary point of M_B, then return M_B.
(4) Return “fail”.
The above algorithm clearly terminates. If B = IE(G) and det(A) given in Theorem 5.18 does not vanish at c, then the origin is an ordinary point of M_B by Theorem 5.18. So the algorithm returns “fail” only if c lies on the variety defined by det(A) = 0. In this sense, we say that the algorithm succeeds with probability one. One advantage of this algorithm is that it is more efficient to compute a Gröbner basis of the intersection of several left ideals most of which are generated by first-order operators with constant coefficients. Another advantage is that it is likely to remove all apparent singularities, not just the one at the origin, because almost all choices of the c_{i,j} also work for apparent singularities at almost every other point. On the other hand, it is not convenient to apply Theorem 5.18 to decide whether the origin is an apparent singularity, because the algorithm always returns “fail” if the origin is a singularity but not an apparent one. Example 5.20.
Consider the Gröbner basis in Example 4.2:
G = {G_1, G_2} = {x_2∂_2 + ∂_1 − x_2 − 1, ∂_1^2 − ∂_1}.
In this case, n = 2, d := rank(G) = 2, and the origin is an apparent singularity of G. Set P := ∂_2^2 − 2∂_2 + 1; indeed,
x_2^2 P = (x_2∂_2 − ∂_1 − x_2) G_1 + G_2,
so P belongs to K(x)[∂]G ∩ K[x][∂]. We find that ind(P) = y_2(y_2 − 1), ind(G_1) = y_1 and ind(G_2) = y_1(y_1 − 1). Thus, a set of initial exponent candidates of G is S = {(0, 0), (0, 1)}. Set B = S and ℓ = |N^2_1| = 3. Choose a point c = (c_{1,1}, c_{2,1}) ∈ K^{n(ℓ−d)} = K^2, say with c_{1,1} = 19. Let M_B be the primitive Gröbner basis of the left ideal
K(x)[∂]G ∩ K(x)[∂]{∂_1 − c_{1,1}, ∂_2 − c_{2,1}}.
We find that no element of HC(M_B) vanishes at the origin. It follows from Definition 3.1 that M_B is a left multiple of G for which the origin is an ordinary point.
In the above example, if we take the roots of {ind(P), ind(G_2)} as a set of initial exponent candidates of G, then it strictly contains the set of initial exponents of G.
Let G be a primitive Gröbner basis. Assume that the origin is an apparent singularity of G. Then sol_E(G) has a basis of formal power series. In this subsection, we present an algorithm for computing such a basis truncated at a given total degree.
Consider a fixed series f = Σ_{u ∈ N^n} (c_u / u!) x^u ∈ K[[x]]. For each m ∈ N, set
(f)_m = Σ_{|u| ≤ m} (c_u / u!) x^u ∈ K[x].
We call (f)_m the m-th truncated power series [14, page 35] of f. As a matter of notation, we write T_m ⊂ K[x] for the ideal generated by {x^u | |u| = m}.
Our idea is based on the proof of the necessity in Theorem 4.6. Assume that rank(G) = d. We desingularize the origin by a left multiple M of G, and then compute a power series basis f_1, . . . , f_ℓ of sol_E(M). Every power series solution of G is a linear combination of f_1, . . . , f_ℓ over K because sol_E(G) ⊂ sol_E(M). More precisely, every solution of the system A z = 0 in (8) gives rise to a power series solution of G and vice versa. Assume that the rows of A are labelled by the elements of N^n increasingly with respect to ≺, and let A_m be the matrix consisting of all rows labelled by elements whose total degree is not higher than m. Then there exists m ∈ N such that A_m has a right kernel of dimension d, because the solution space of A z = 0 in (8) has dimension d over K. So we just need to build the matrix A_m incrementally, using (f_1)_{m+r}, . . . , (f_ℓ)_{m+r}, until rank(A_m) = ℓ − d, and then compute a basis of the right kernel of A_m, where r is the maximal order of the elements of G. This idea is encoded in the following algorithm. Algorithm 5.21.
Given m ∈ N and a primitive Gröbner basis G of rank d such that the origin is an apparent singularity, compute polynomials p_1, . . . , p_d ∈ K[x] such that there exist K-linearly independent power series solutions g_1, . . . , g_d of G with the property
p_1 = (g_1)_m, . . . , p_d = (g_d)_m.
(1) [Desingularize] By Algorithm 5.10, compute a left multiple M of G such that the origin is an ordinary point of M, and set ℓ := rank(M).
(2) [Initialize] Let r be the maximal order of the elements of G. Set s = m and let z_1, . . . , z_ℓ be constant indeterminates.
(3) [Construct a matrix of maximal rank incrementally] Repeat
(3.1) Set s = s + 1.
(3.2) By formula (3), compute a truncated power series of the form
(Σ_{u ∈ N^n} (c_u / u!) x^u)_{s+r},   (13)
in which c_u is an arbitrary constant for every u ∈ PE(M).
(3.3) For each u ∈ PE(M), specialize c_u = 1 and c_{u′} = 0 for u′ ≠ u in (13) to obtain polynomials h_1, . . . , h_ℓ.
(3.4) Construct a matrix A_s with ℓ columns over K such that its right kernel is equal to the solution space of the linear system
G_t (z_1 h_1 + · · · + z_ℓ h_ℓ) ≡ 0 mod T_{s+1}, t = 1, . . . , k,
where G = {G_1, . . . , G_k}.
Until rank(A_s) = ℓ − d.
(4) [Compute truncated solutions] Find a basis (c_{1,1}, . . . , c_{ℓ,1})^t, . . . , (c_{1,d}, . . . , c_{ℓ,d})^t of the right kernel of A_s and set
p_j = (Σ_{i=1}^ℓ c_{i,j} h_i)_m, j = 1, . . . , d.
(5) Return p_1, . . . , p_d. Example 5.22.
Consider the Gröbner basis in Example 4.2:
G = {G_1, G_2} = {x_2∂_2 + ∂_1 − x_2 − 1, ∂_1^2 − ∂_1}.
In this case, rank(G) = 2 and the origin is an apparent singularity of G. We compute the 2nd truncated power series of a basis of sol_E(G).
(1) Let M be the primitive Gröbner basis of the left ideal
K(x)[∂]G ∩ K(x)[∂]{x_1∂_1 − 1, ∂_2}.
We find that the origin is an ordinary point of M and rank(M) = 3.
(2) Let r = 2 be the maximal order of the elements of G. Set s = 2 and let z_1, z_2, z_3 be constant indeterminates.
(3) By formula (3), we obtain truncated power series of a basis of sol_E(M):
h_1 = (exp(x_1 + x_2) − x_1 − x_2 exp(x_2))_{s+r},
h_2 = (x_1)_{s+r},
h_3 = (x_2 exp(x_2))_{s+r}.
We find a matrix A over K with three columns and rank 1 such that its right kernel is equal to the solution space of the linear system
G_t (z_1 h_1 + z_2 h_2 + z_3 h_3) ≡ 0 mod T_{s+1}, t = 1, 2.
(4) We find that the right kernel of A has the basis (1, 1, 0)^t, (0, 0, 1)^t, and set
p_1 = (h_1 + h_2)_2 = (exp(x_1 + x_2) − x_2 exp(x_2))_2,
p_2 = (h_3)_2 = (x_2 exp(x_2))_2.
(5) Return p_1, p_2.
Actually, sol_E(G) has a basis {exp(x_1 + x_2), x_2 exp(x_2)}. So p_1 and p_2 are indeed 2nd truncated power series of a basis of sol_E(G).
References
[1] S. A. Abramov, M. Barkatou, and M. van Hoeij. Apparent singularities of linear difference equations with polynomial coefficients.
AAECC , 117–133,2006.[2] S. A. Abramov and M. van Hoeij. Desingularization of linear differenceoperators with polynomial coefficients. In
Proc. of ISSAC’99, 269–275, New York, NY, USA, 1999, ACM. [3] F. Aroca and J. Cano. Formal solutions of linear PDEs and convex polyhedra.
J. Symb. Comput. , 32:717–737, 2001.[4] M. A. Barkatou and S. S. Maddah. Removing apparent singularities ofsystems of linear differential equations with rational function coefficients.In
Proc. of ISSAC’15 , 53–60, New York, NY, USA, 2015, ACM.[5] T. Becker and V. Weispfenning.
Gröbner Bases: A Computational Approach to Commutative Algebra. Springer-Verlag, New York, USA, 1993. [6] M. Bronstein, Z. Li and M. Wu. Picard-Vessiot extensions for linear functional systems. In
Proc. of ISSAC’05 , 68–75, New York, NY, USA, 2005,ACM.[7] S. Chen, M. Kauers, and M. F. Singer. Desingularization of Ore operators.
J. Symb. Comput. , 74:617–626, 2016.[8] F. Chyzak. Mgfun Project. http://algo.inria.fr/chyzak/mgfun.html.[9] F. Chyzak and B. Salvy. Non-commutative elimination in Ore algebrasproves multivariate identities.
J. Symb. Comput. , 26:187–227, 1998.[10] D. Cox, J. Little and D. O’Shea.
Ideals, Varieties, and Algorithms.
Springer, 2015.[11] I. Gessel. Two theorems on rational power series. Utilitas Mathematica,19:247–254, 1981.[12] E. Ince.
Ordinary Differential Equations.
Dover, 1926. [13] M. Jaroschek.
Removable singularities of Ore operators . PhD thesis, RISC-Linz, Johannes Kepler Univ., 2013.[14] M. Kauers and P. Paule.
The Concrete Tetrahedron.
Springer, 2011. [15] A. Kandri-Rody and V. Weispfenning. Non-commutative Gröbner bases in algebras of solvable type.
J. Symb. Comput. , 9:1–26, 1990.[16] E. Kolchin.
Differential Algebra and Algebraic Groups.
Academic Press.,New York, 1973.[17] C. Koutschan.
Advanced Applications of the Holonomic Systems Approach .PhD thesis, RISC-Linz, Johannes Kepler Univ., 2009.[18] C. Koutschan.
HolonomicFunctions User’s Guide . RISC Report Series,Johannes Kepler Univ., 2010.[19] Z. Li, F. Schwarz and S. Tsarev. Factoring zero-dimensional ideals of linearpartial differential operators. In
Proc. of ISSAC’02, 168–175, New York, NY, USA, 2002, ACM. [20] M. van der Put and M. Singer.
Galois Theory of Linear Differential Equations.
Springer, 2003.[21] D. Robertz.
Formal Algorithmic Elimination for PDEs . Springer, 2014.[22] M. Saito, B. Sturmfels, and N. Takayama.
Gröbner Deformations of Hypergeometric Differential Equations.
Springer-Verlag, New York, USA, 1999.[23] H. Tsai.
Algorithms for algebraic analysis.
PhD thesis, University of Cali-fornia at Berkeley, 2000.[24] W.T. Wu. On the foundation of algebraic differential geometry.
Journal ofSystems Science and Complexity