A New Deflation Method For Verifying the Isolated Singular Zeros of Polynomial Systems
Jin-San Cheng^{b,c}, Xiaojie Dou^{a}, Junyi Wen^{b,c}
^{a} College of Science, Civil Aviation University of China, Tianjin
^{b} KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing
^{c} University of Chinese Academy of Sciences, Beijing
[email protected] [email protected] [email protected]
December 1, 2019
Abstract
In this paper, we develop a new deflation technique for refining or verifying the isolated singular zeros of polynomial systems. Starting from a polynomial system with an isolated singular zero, by computing the derivatives of the input polynomials directly or the linear combinations of the related polynomials, we construct a new system, which can be used to refine or verify the isolated singular zero of the input system. In order to preserve accuracy in numerical computation as much as possible, new variables are introduced to represent the coefficients of the linear combinations of the related polynomials. To our knowledge, this is the first time the deflation problem of polynomial systems has been considered from the perspective of linear combination. Some acceleration strategies are proposed to reduce the size of the final system. We also give further analysis of the tolerances we use, which helps to give a better understanding of our method. The experiments show that our method is effective and efficient. In particular, it works well for zeros of high multiplicity of large systems. It also works for isolated singular zeros of non-polynomial systems.
Key words.
Polynomial system, deflation method, isolated singular zero, interval verification
Solving polynomial systems with singular zeros is always a challenge in algebraic and geometric computation. For an isolated simple zero of a polynomial system, the classical Newton's method is widely used and quadratically convergent. However, for singular zeros of a polynomial system, Newton's method is not fit for the original system directly, because it converges slowly or may even fail to converge in a bad situation. What's more, it is an ill-posed problem to compute an isolated singular zero of a polynomial system or a nonlinear system, since a small perturbation of the coefficients may transform an isolated singular zero into a cluster of simple zeros.

Therefore, finding methods that keep the quadratic convergence of Newton's method for singular zeros is a way to handle this problem. Given a polynomial system with an isolated singular zero, we can construct a new system having the same singular zero as an isolated simple one. Based on this idea, many symbolic or symbolic-numerical methods have come up in recent years to deal with this problem. The basic idea is the deflation technique [1, 3, 4, 6, 7, 8, 10, 25], which usually follows one of two basic strategies: adding new equations only, or adding both new equations and new variables to the original system.

Deflation for an isolated singular solution originated from the ideas of Ojika [19, 20, 21]. T. Ojika et al. present a deflation algorithm for determining the multiple zeros of a system of nonlinear equations. By triangulating the Jacobian matrix of the original system at an approximate zero, new equations, which come from the minors of the Jacobian matrix, are introduced into the original system to reduce the multiplicity until a system is obtained which is regular at the singular zero.

In [5], Giusti and Yakoubsohn propose a construction, based on two operations, deflating and kerneling, to determine a regular system without adding new variables.
In the deflating operation, all the partial derivatives of the polynomials which are zero at the multiple zero are introduced to replace the corresponding polynomials. The kerneling operation consists of adding to the original system the polynomials given by the nonzero numerators of the coefficients of the Schur complement of the Jacobian matrix of the original system.

In [8], Hauenstein and Wampler define a strong deflation by adding only new equations, coming from the first-order differential of the Jacobian matrix of the original system, to the original system. Different from [5], at each iteration step, both the number and the degree of the added equations are reduced.

In [9], Mourrain et al. give a method which uses a single linear differential form defined from the Jacobian matrix of the input system, and defines the deflated system by applying this differential form to the original system.

The above methods introduce only new equations and finally obtain a new system having the isolated singular zero of the original system as a simple zero. In order to get the new polynomials, one needs to compute the determinants of some polynomial matrices. Thus the degree of the polynomials in the new system may be very high.

In the following, denote by n the number of both the variables and the equations in the original system, and by µ the multiplicity of the isolated singular zero of the original system.

In [18], Yamamoto introduces new equations and new variables to the original system simultaneously. New variables are used to bring some perturbations of the original system and of the Jacobian matrix of the original system, which produce new equations.

In [12, 13, 24], Leykin et al. present an effective modification of Newton's method to restore quadratic convergence for isolated singular solutions of polynomial systems. Different from [18], new variables are introduced only to the Jacobian matrix of the original system, which produces new equations.
Meanwhile, they also prove that the number of deflation stages is bounded by the multiplicity µ.

In [2], Dayton and Zeng modify the method in [12] and further prove that the number of deflation steps is bounded by the depth of the dual space. For the special case of breadth one, they also propose a modified deflation method, based on duality analysis, to reduce the final size 2^{µ−1}n × 2^{µ−1}n of the deflated system in [12] to µn × µn.

In [23], by introducing a smoothing parameter to the original system, a deflated system of size µn × µn is also obtained.

In [17], based on the given multiplicity structure of the original system, which depends on the accuracy of the given approximate multiple zero, Mantzaflaris and Mourrain give a method to find a (small) perturbed system of the original system and then compute a deflated system in one deflation step. The size of the final deflated system is equal to µn × µn.

In [16], by lifting the independent perturbations in the first-order differential system appearing in [18] back to the original system, Li and Zhi modify the method in [18] and also prove that the modified deflation technique terminates after a finite number of steps bounded by the depth of the dual space. The size of the final modified regularized system is bounded by 2^{µ−1}n × 2^{µ−1}n.

In [9], by introducing some variables to represent the coefficients of the dual basis, Mourrain et al. give a method to deflate the original system and determine the multiplicity structure simultaneously. They also show that the numbers of variables and of equations in this method are bounded by n + nµ(µ−1)/2 and nµ + n(n−1)(µ−1)/2, respectively.

Main contribution.
In this paper, given a polynomial system F ⊂ C[x] with an isolated singular zero p, by computing the derivatives of the input polynomials directly or the linear combinations of the related polynomials, we propose a new deflation method to construct a final deflated system F̃′(x, α), which has an isolated simple zero (p, α̂) whose projection corresponds to the isolated singular zero p of the input system. New variables α are introduced to represent the coefficients of the linear combinations of the related polynomials, to ensure the accuracy of the numerical implementation. Moreover, we also prove that the size of our deflation system depends on the depth or the multiplicity of p.

Compared to the previous methods, our method has the following differences:

1. For the input system F, we can, if needed, compute the derivatives of every f_i to get the needed polynomials, which are regular at p, at the beginning. Then, we put all these polynomials together to construct a system whose Jacobian matrix at p has maximal rank r. In some cases, we have r = n, which means that we need not introduce new variables.

2. We compute the derivatives of the linear combinations of the related polynomials to get some polynomials which are regular at p. Here we introduce new variables to represent the coefficients of the linear combination.

3. Considering that we know only an approximate zero p̃ in actual computations, we use a tolerance θ to judge whether a polynomial is θ-regular or θ-singular at p̃, and another tolerance ε to judge the numerical rank of the Jacobian matrix. As long as the tolerance θ is chosen properly, we get the same judgement in the numerical case as in the exact case. Thus, our deflation system usually has the same isolated zero as the input system.
Inspired by the work [12] of Leykin et al., we also give some further analysis of the tolerances θ and ε, which tells us that our final system is, in the worst case, a perturbed system with a bounded perturbation. To make our final system as accurate as possible, we also analyse the case where the tolerance θ is not introduced.

Thanks to the above acceleration strategies, the size of the final system in our actual computations is much smaller than the one we give in theory. Furthermore, we implement our method in Matlab. The experiments show that our method is effective and efficient, especially for large systems with singular zeros of high multiplicities. Besides, our method is also applicable to non-polynomial systems.

The paper is organized as follows. We introduce some notations and preliminaries in the next section. In Section 3, we give a new deflation idea to construct a deflated square system from the input system with an isolated singular zero. An effective version of our method is given in Section 4. Some numerical experimental results are given to demonstrate the performance of our algorithm in Section 5 and, at last, we draw a conclusion in Section 6.

Let C be the complex field and C[x] = C[x_1, ..., x_n] be the polynomial ring. Denote F = {f_1, f_2, ..., f_n} ⊂ C[x] as a polynomial system and deg(f_i) as the degree of the polynomial f_i. Similarly, deg(F) = max_{f_i ∈ F} deg(f_i). Let p = (p_1, ..., p_n) ∈ C^n. F(p) = 0 denotes that p is a zero of F(x) = 0.

Let V(F) ⊂ C^n denote the variety defined by F and dim V(F) denote the dimension of V(F).

Let d_x^γ : C[x] → C[x] denote the differential functional defined by

d_x^γ(f) = 1/(γ_1! ··· γ_n!) · ∂^{|γ|} f / (∂x_1^{γ_1} ··· ∂x_n^{γ_n}), ∀ f ∈ C[x],

where γ = (γ_1, ..., γ_n) ∈ N^n with N = {0, 1, 2, ...} and |γ| = Σ_{i=1}^n γ_i.

Denote rank(A) as the rank of a matrix A. Denote J(F) as the Jacobian matrix of F. That is,

J(F) = [ ∂f_1/∂x_1 ... ∂f_1/∂x_n ; ... ; ∂f_n/∂x_1 ... ∂f_n/∂x_n ].

For a polynomial f ∈ C[x], let J(f) denote (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n) and J_i(f) = ∂f/∂x_i. Let J(F)(p) denote the value of the function matrix J(F) at a point p, and similarly for J(f)(p).

Definition 1. An isolated zero of F(x) = 0 is a point p ∈ C^n which satisfies: ∃ ε > 0 such that {y ∈ C^n : ‖y − p‖ < ε} ∩ F^{−1}(0) = {p}, where F^{−1}(0) := {p ∈ C^n : F(p) = 0}.

Definition 2.
We call an isolated zero p ∈ C^n of F(x) = 0 an isolated singular zero if and only if rank(J(F)(p)) < n. Otherwise, p is an isolated regular (simple) zero of F(x) = 0.

The
Taylor series expansion (Taylor expansion for short) of f ∈ C[x] at p = (p_1, ..., p_n) ∈ C^n is

f(x) = f(p) + Σ_{j=1}^n (∂f(p)/∂x_j)(x_j − p_j) + (1/2) Σ_{1≤i,j≤n} (∂^2 f(p)/∂x_i∂x_j)(x_i − p_i)(x_j − p_j) + ....   (1)

Definition 3.
Let p ∈ C^n and f(p) = 0. We say f ∈ C[x] is singular at p if

∂f(p)/∂x_j = 0, ∀ 1 ≤ j ≤ n.

Otherwise, we say f is regular at p.

Definition 4.
Let f ∈ C[x], p̃ ∈ C^n and a tolerance θ > 0, s.t. |f(p̃)| < θ. We say f is θ-singular at p̃ if

|∂f(p̃)/∂x_j| < θ, ∀ 1 ≤ j ≤ n.

Otherwise, we say f is θ-regular at p̃.

Lemma 5.
Let f ∈ C[x] \ C, s.t. f(p) = 0. Then there exists at least one γ ∈ N^n s.t. d_x^γ(f) is regular at p.

Proof. Without loss of generality, we assume p = 0. Then f can be rewritten as a sum of homogeneous polynomials as

f = Σ_{d=1}^{deg(f)} f_d.

Since f
≠ 0, there exists at least one γ′ ∈ N^n such that d_x^{γ′}(f)(p) ≠ 0. Thus there exists at least one γ ∈ N^n such that d_x^γ(f) is regular at p.

Now, we give an example to explain this and to illustrate Definitions 3 and 4 and Lemma 5.

Example 1.
Let f ∈ C[x_1, ..., x_4] with f(p) = 0 at the exact point p = (0, 0, 0, 0), and write out the Taylor expansion of f at p. Since |f(p)| = 0, |J_1(f)(p)| = 0 and |J_i(f)(p)| ≠ 0 for i = 2, 3, 4, we know that f is regular at p, and so is a suitable derivative d_x^γ(f) of it.

Similarly, for an approximate point p̃ close to p and a suitable tolerance θ > 0, the Taylor expansion of f at p̃ carries small perturbations in its coefficients. Since |f(p̃)| < θ, |∂f(p̃)/∂x_1| < θ and |∂f(p̃)/∂x_j| > θ for j = 2, 3, 4, f is θ-regular at p̃.

From this example, it is easy to see that, compared with the exact case, the approximate zero p̃ brings a small perturbation into the coefficients of the Taylor expansion of f at p̃. However, once a proper θ is given, we acquire the same judgement as in the exact case: f is regular at p and it is also θ-regular at p̃.

Definition 6.
Denote the operation set ∆ := {+, ·, ∂}, where "+" denotes the sum of two polynomials, "·" denotes scalar multiplication and "∂" the differentiation of a polynomial. Given a polynomial system F = {f_1, ..., f_n} ⊂ C[x] and p ∈ C^n such that F(p) = 0, we define a polynomial set ∆_p(F) which satisfies:

(1) F ⊂ ∆_p(F);
(2) {a h | h ∈ ∆_p(F), a ∈ C \ {0}} ⊂ ∆_p(F);
(3) {h_1 + h_2 | h_1(p) + h_2(p) = 0, h_1, h_2 ∈ ∆_p(F)} ⊂ ∆_p(F);
(4) {∂h/∂x_i | ∂h/∂x_i(p) = 0, i ∈ {1, ..., n}, h ∈ ∆_p(F)} ⊂ ∆_p(F).

Especially, for one polynomial f ∈ C[x], we have the corresponding set ∆_p(f).

The following lemma shows the relationship between the polynomials in ∆_p(F) and the polynomials in F.

Lemma 7.
Let F = {f_1, ..., f_n} ⊂ C[x] and p ∈ C^n, s.t. F(p) = 0. Then ∀ g ∈ ∆_p(F), we have

g = Σ_{i=1}^n Σ_j a_{i,j} ∂^{|γ_{i,j}|} f_i / ∂x^{γ_{i,j}},   (2)

where a_{i,j} ∈ C and γ_{i,j} ∈ N^n.

Proof. The proof is obvious.

We illustrate Definition 6 and Lemma 7 by the following example.
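The differential functional d_x^γ from the preliminaries, which drives the operations behind ∆_p(F), is easy to experiment with on a sparse monomial representation. The sketch below is our own illustration, not the paper's Matlab code, and the polynomial it works on is an assumed example rather than one taken from the paper.

```python
# Our own minimal sketch (not the paper's code): polynomials are dicts mapping
# exponent tuples to coefficients, and d_gamma applies the functional
# d_x^gamma(f) = 1/(gamma_1! ... gamma_n!) * d^|gamma| f / dx^gamma.
from math import comb

def d_gamma(poly, gamma):
    """The 1/gamma! factor turns the gamma-fold derivative of x^e into
    binomial(e, gamma) * x^(e - gamma), computed variable by variable."""
    out = {}
    for expo, c in poly.items():
        if all(e >= g for e, g in zip(expo, gamma)):
            coef = c
            for e, g in zip(expo, gamma):
                coef *= comb(e, g)
            out[tuple(e - g for e, g in zip(expo, gamma))] = coef
    return out

# assumed example: f = (x + y)^2 + x^3 = x^2 + 2xy + y^2 + x^3,
# which vanishes and is singular at the origin
f = {(2, 0): 1, (1, 1): 2, (0, 2): 1, (3, 0): 1}

print(d_gamma(f, (1, 0)))  # 2x + 2y + 3x^2: vanishes at the origin
print(d_gamma(f, (0, 1)))  # 2x + 2y: vanishes there, nonzero gradient,
                           # i.e. regular at the origin, as Lemma 5 promises
print(d_gamma(f, (0, 2)))  # the constant 1: nonvanishing at the origin
```

Rule (4) of Definition 6 corresponds to iterating such derivatives while the result still vanishes at p.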
Example 2.
Let F = {f_1 = (x + y)^2 + x^3, f_2 = 2x + 2y + y^2}. p = (0, 0) is an isolated zero of F = 0. Let

h_1 = ∂f_1/∂x = 2(x + y) + 3x^2,  h_2 = ∂f_1/∂y = 2(x + y),
h_3 = h_1 − f_2 = 3x^2 − y^2,  h_4 = ∂h_3/∂x = 6x,  h_5 = ∂h_3/∂y = −2y.

It is clear that h_i ∈ ∆_p(F), i = 1, ..., 5, and each h_i has the form (2).

Given a polynomial system with a multiple zero, a Newton-type method usually is not applied directly to the input system, since it converges slowly or may even fail to converge. Thus, deflation techniques are developed to transform the input system into another deflated system, which is regular at some zero whose certain projection is the given multiple zero.

In this section, given a polynomial system F ⊂ C[x] with an isolated singular zero p ∈ C^n, by employing some differential operations on the input polynomials or on the linear combinations of the related polynomials, we propose a new method to construct a new square system F′ ⊂ C[x], which satisfies that p is a simple zero of F′ = 0. We also prove the existence of F′ and show some of its properties.

First, let us see a simple example to explain our idea.

Example 3.
Let F = {f_1 = x − y + x^2, f_2 = x − y + y^2} with a 3-fold isolated zero p = (0, 0). Obviously, f_1 and f_2 are already regular at p. However, it is easy to find that the degree-one terms of f_1 and f_2 are linearly dependent. Using f_2 − f_1 to eliminate these terms of degree one, we get the polynomial h = y^2 − x^2 and two new polynomials ∂h/∂x = −2x, ∂h/∂y = 2y, which are both regular at p. Selecting the two polynomials f_1 and ∂h/∂y, we get a new square system F′ = {x − y + x^2, 2y}, which has a regular zero p = (0, 0). Moreover, it is a system without perturbation.

Based on the idea in the above simple example, we now show our technique to construct a deflated square system.

Assume that we have got the polynomials g_1, ..., g_s, which are regular at p, from the input polynomials f_1, ..., f_s, such that

rank(J(g_1, ..., g_s)(p)) = s.

Given one more polynomial f_{s+1}, we want to compute another polynomial g_{s+1}, s.t.

rank(J(g_1, ..., g_s, g_{s+1})(p)) = s + 1.

Using only g_1, ..., g_s and f_{s+1}, we may not get a suitable g_{s+1} if

dim V(g_1, ..., g_s, f_{s+1}) > dim V(f_1, ..., f_s, f_{s+1}).

The input polynomials are needed in this case. Thus, we use {g_1, ..., g_s} ∪ {f_1, ..., f_{s+1}} to compute g_{s+1}. We will show how to compute g_{s+1} in the following lemma.

Lemma 8. Let F = {f_1, ..., f_s, f_{s+1}, ..., f_{s+k}} ⊂ C[x_1, ..., x_n] (k ≥ 1) and p ∈ C^n, s.t. F(p) = 0 and rank(J(F)(p)) = s. Assume dim V(F) ≤ n − s − 1 and deg(F) = m (m > 1). Then we can get a polynomial system F′ = {f′_1, ..., f′_s, f′_{s+1}}, which satisfies:

1. rank(J(F′)(p)) = s + 1, and f′_j ∈ ∆_p(F) (1 ≤ j ≤ s + 1);
2. deg(F′) ≤ m.

Proof. Without loss of generality, we assume that p is the origin and

rank(J(f_1, ..., f_s)(p)) = s.   (3)

In the following, we consider the case of s >
0, since if s = 0, we can apply the operators ∂ to f_i (1 ≤ i ≤ s + k) to get some polynomials which are regular at p.

To construct a polynomial system F′, s.t. rank(J(F′)(p)) = s + 1, we consider the remaining polynomials {f_{s+1}, ..., f_{s+k}}. Our proof is constructive.

First, f_i (i = 1, ..., s) has the form:

f_i = Σ_{k=1}^n a_{ik} x_k + T_i,

where T_i ∈ C[x] and deg(T_i) = 0 or deg(T_i) ≥
2. It is easy to see that the row vectors a_i = (a_{i1}, ..., a_{in}) (1 ≤ i ≤ s) of the Jacobian matrix of (f_1, ..., f_s) at p are linearly independent, since (3) holds.

Therefore, we can consider the following linear coordinate transformation L:

L: y_i = Σ_{k=1}^n a_{ik} x_k, 1 ≤ i ≤ s;  y_i = x_i, i = s + 1, ..., n.

With a realignment of the sequence of the variables {x_1, ..., x_n}, we can always have the first s columns of the coefficient matrix of L being linearly independent. Then L is invertible. Denote the inverse of L as L^{−1}. Let p′ = L(p) and F_i = L^{−1}(f_i) ∈ C[y_1, ..., y_n]. We have:

F_i = y_i + L^{−1}(T_i), i = 1, ..., s,
F_{s+i} = Σ_{j=1}^s b_{i,j} y_j + L^{−1}(T_{s+i}), i = 1, ..., k.   (4)

Since dim V(F) ≤ n − s − 1 and L^{−1} is invertible, it is obvious that

dim V(F_1, ..., F_{s+k}) ≤ n − s − 1.

Therefore, noticing that the degree-one terms of all F_i (i = 1, ..., s + k) in (4) contain only s variables, there must be at least one of {L^{−1}(T_i), i = 1, ..., s + k} containing at least one term of the form y_{s+1}^{d_{s+1}} y_{s+2}^{d_{s+2}} ··· y_n^{d_n}, such that Σ_{j=s+1}^n d_j > 0. Otherwise, suppose that all L^{−1}(T_i) (1 ≤ i ≤ s + k) contain no terms of the form y_{s+1}^{d_{s+1}} y_{s+2}^{d_{s+2}} ··· y_n^{d_n}. Then, all the terms of F_i (1 ≤ i ≤ s + k) have the form y_1^{d_1} ··· y_s^{d_s} y_{s+1}^{d_{s+1}} ··· y_n^{d_n} with Σ_{j=1}^s d_j >
0. In this case, the system {F_1, ..., F_{s+k}} vanishes on {y_1 = 0, ..., y_s = 0}. Thus, we can verify easily that dim V(F_1, ..., F_{s+k}) = n − s, which contradicts dim V(F_1, ..., F_{s+k}) ≤ n − s −
1. Thus, the claim is true.

Without loss of generality, we suppose that L^{−1}(T_l) (l ∈ {1, ..., s + k}) has a term of the form y_{s+1}^{d_{s+1}} y_{s+2}^{d_{s+2}} ··· y_n^{d_n} and take the variable y_{s+1} for example, i.e. d_{s+1} ≠ 0. Further, we ask for the term with lowest degree among all terms of this kind and denote the lowest degree by d. Then, we have:

F′_{s+1} = ∂^{d−1} F_l / (∂y_{s+1}^{d_{s+1}−1} ∂y_{s+2}^{d_{s+2}} ··· ∂y_n^{d_n}) = Σ_{i=1}^n γ_i y_i + T′_l,  d = Σ_{j=s+1}^n d_j.   (5)

It is easy to see that γ_{s+1} ≠ 0 and deg(F′_{s+1}) < deg(F_l).

Thus, we have a new system {F_1, ..., F_s, F′_{s+1}}. It is easy to check that

rank(J(F_1, ..., F_s, F′_{s+1})(p′)) = s + 1.

Finally, after applying the transformation L to F_i (1 ≤ i ≤ s) and F′_{s+1}, we have the new system F′ = {f′_1, ..., f′_{s+1}}, where

f′_i = L(F_i) = f_i (i = 1, ..., s),  f′_{s+1} = L(F′_{s+1}),

with

rank(J(f′_1, ..., f′_s, f′_{s+1})(p)) = s + 1.

By the definition of ∆_p(F) (see Definition 6), we can find that f′_i ∈ ∆_p(F) (1 ≤ i ≤ s + 1). Therefore, we have finished the first part of the proof.

From Lemma 7 and (5), it is easy to know that the maximal degree of f′_i (i = 1, ..., s + 1) is no larger than m. That is, deg(F′) ≤ m. Thus, we complete the proof.

Now, we consider constructing a square system which is regular at an isolated singular zero of the input system.

Theorem 9.
Let F = {f_1, ..., f_N} ⊂ C[x] (N ≥ n) be a polynomial system, p ∈ C^n an isolated singular zero of F = 0 and deg(F) = m. Then there exists a square polynomial system F′ = {f′_1, ..., f′_n} ⊂ ∆_p(F), s.t.

1. p is an isolated regular zero of F′ = 0;
2. deg(F′) ≤ m.

Proof. Without loss of generality, assume that p is the origin. In the following, we will construct a square system from the polynomials in ∆_p(F).

First, we can choose a system F_0 from F, denoted F_0 = {f_1, ..., f_r}, whose Jacobian matrix at p has maximal rank, s.t.

rank(J(f_1, ..., f_r)(p)) = rank(J(F)(p)) = r, 0 ≤ r ≤ n.

If r = n, we finish the proof. Notice that when r = 0, we need only consider at least one of the polynomials f_1, ..., f_N, and by Lemma 5 we can always get at least one polynomial which is regular at p. Thus, in the following, we consider the case 1 ≤ r < n.

First, considering the system {f_1, ..., f_r, f_{r+1}, ..., f_N}, by Lemma 8, we can get a system F_1 = {f_1^{(1)}, ..., f_r^{(1)}, f_{r+1}^{(1)}}, s.t. F_1(p) = 0 and

rank(J(f_1^{(1)}, ..., f_r^{(1)}, f_{r+1}^{(1)})(p)) = r + 1.

Using the technique in Lemma 8, when considering the system F ∪ {f_1^{(1)}, ..., f_r^{(1)}, f_{r+1}^{(1)}}, we can get a system F_2 = {f_1^{(2)}, ..., f_{r+1}^{(2)}, f_{r+2}^{(2)}}, s.t. F_2(p) = 0 and

rank(J(f_1^{(2)}, ..., f_{r+1}^{(2)}, f_{r+2}^{(2)})(p)) = r + 2.

Repeating this process n − r times, we finally get a square system F_{n−r} = {f_1^{(n−r)}, f_2^{(n−r)}, ..., f_n^{(n−r)}}, s.t. F_{n−r}(p) = 0 and

rank(J(f_1^{(n−r)}, f_2^{(n−r)}, ..., f_n^{(n−r)})(p)) = n.

Thus, our final square system is F′ = {f′_1 = f_1^{(n−r)}, f′_2 = f_2^{(n−r)}, ..., f′_n = f_n^{(n−r)}}. By Lemma 8, it is obvious that the maximal degree of f′_i (1 ≤ i ≤ n) is no larger than m. That is, deg(F′) ≤ m.

Remarks.
1. In the above construction process, we repeat n − r times to get the deflated system F′. If we consider all the variables simultaneously, we get more than one eligible polynomial each time in (5). Thus, the number of iterations in actual computation is less than n − r.

2. At the beginning of our construction, we can also compute all the related polynomials of all the input polynomials which are regular at p. Then, we choose a system from these polynomials whose Jacobian matrix at p has maximal rank. That is to say, we make r as big as possible to reduce the number of repeating steps.

Theorem 9 tells us that given a polynomial system F with an isolated singular zero p, we can construct a new square system F′ which is regular at p and, moreover, the degree of the polynomials in F′ does not increase. We give an example to illustrate our method.

Example 4. (DZ2 [2]) Let F = {f_1 = x_1^4, f_2 = x_1^2 x_2 + x_2^4, f_3 = x_3 + x_3^2 − 7x_1^3 − 8x_1^2}, which has a 16-fold zero p = (0, 0, −1). The maximal degree of f_1, f_2, f_3 is 4. First, by the Taylor expansions of f_1, f_2, f_3 at p, we have:

f_1 = x_1^4,
f_2 = x_1^2 x_2 + x_2^4,
f_3 = −(x_3 + 1) − 7x_1^3 + (x_3 + 1)^2 − 8x_1^2.

It is easy to find that only f_3 is regular at p. Since s = rank(J(F)(p)) = 1 and dim V(f_3, f_2) = 1, we consider the system {f_3, f_2} directly. By Lemma 8, we have a system {f_1^{(1)} = f_3, f_2^{(1)} = d_x^{(2,0,0)}(f_2) = x_2}, which satisfies rank(J(f_1^{(1)}, f_2^{(1)})(p)) = 2.

Next, we continue to consider the system {f_1^{(1)}, f_2^{(1)}} ∪ F. Since dim V(f_1^{(1)}, f_2^{(1)}, F) = 0, by Lemma 8, we have a system {f_1^{(2)} = f_3, f_2^{(2)} = x_2, f_3^{(2)} = d_x^{(3,0,0)}(f_1) = 4x_1}, which satisfies rank(J(f_1^{(2)}, f_2^{(2)}, f_3^{(2)})(p)) = 3.

Thus, we acquire the final square system F′ = {f_3, x_2, 4x_1}. It is easy to check that p is a simple zero of F′ = 0 and the degree of every polynomial in F′ is no more than 4.
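The regularity of the deflated DZ2 system can also be confirmed numerically. The sketch below is our own check, not the paper's Matlab implementation: plain Newton iteration on F′ = {f_3, x_2, 4x_1}, started near the original 16-fold zero, converges rapidly to p = (0, 0, −1).

```python
# Our own numeric check (not the paper's code) that the deflated system of
# the DZ2 benchmark, F' = {f3, x2, 4*x1}, is regular at p = (0, 0, -1):
# Newton's method converges to p without the stalling seen at multiple zeros.

def Fp(v):
    x1, x2, x3 = v
    f3 = x3 + x3**2 - 7 * x1**3 - 8 * x1**2
    return [f3, x2, 4 * x1]

def Jp(v):
    x1, x2, x3 = v
    return [[-21 * x1**2 - 16 * x1, 0.0, 1.0 + 2 * x3],
            [0.0, 1.0, 0.0],
            [4.0, 0.0, 0.0]]

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, 3):
            t = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= t * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

v = [0.01, 0.02, -0.98]        # a start point near the singular zero
for _ in range(8):
    d = solve3(Jp(v), Fp(v))
    v = [vi - di for vi, di in zip(v, d)]
print(v)  # close to (0, 0, -1)
```

Since the Jacobian of F′ at p is nonsingular, the iteration regains the quadratic convergence that is lost on the original system F.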
In this example, we repeat n − s = 2 times to acquire the final square system F′. In fact, as stated in Remark 2 after Theorem 9, computing twice is not necessary. Notice that when computing f_2^{(1)} = d_x^{(2,0,0)}(f_2) = x_2, we can also get d_x^{(1,1,0)}(f_2) = 2x_1. They are both regular at p. It is easy to check that

rank(J(f_1^{(1)}, f_2^{(1)}, d_x^{(1,1,0)}(f_2) = 2x_1)(p)) = 3.

Thus, we obtain another square system F′ = {f_3, x_2, 2x_1}.

In this section, by introducing some new variables to represent the coefficients of the linear combinations, we give an effective version of our deflation method. The deflated system produced by our deflation method has a simple zero whose partial projection corresponds to the isolated singular zero of the input system. Furthermore, we also analyze the influence of the given tolerances θ and ε on our method and show how to adjust their values to get a deflated system as exact as possible.

Given a polynomial system F with an isolated singular zero p, by employing some differential operations on the input polynomials directly or on the linear combinations of the related polynomials, we gave a method in Section 3 to construct a new polynomial system F′ which satisfies that p is a simple zero of F′ = 0.

However, in practice, we can get only an approximate zero p̃. As shown in Example 1, the inexact value of p̃ usually brings perturbations into the coefficients when doing the Taylor series expansions of the input polynomials at p̃. Therefore, we cannot do exact computations when adding two or more polynomials together. The inexact computations would produce a perturbed system of F′, which will lead to a bad final deflation result. We show an example to illustrate this case.

Example 5. Continue with Example 3. Given an approximate zero p̃ close to p and using the method in Theorem 9, we have h̃ = f_2 + α̃f_1. By solving a least-squares problem, we get a coefficient α̃ close to, but not exactly, −1. Finally, we get an inexact system F̃′ = {x − y + x^2, 2y − c}, where c = 1 + α̃ is a small nonzero constant caused by the inexact α̃.
Obviously, we cannot get a good result from the system F̃′. With a simple analysis, we can find that we cannot get an exact coefficient α of the linear combination of the polynomials from an approximate zero.

In the following, by introducing some new variables to represent the coefficients of the linear combinations, we give an effective version of our deflation method. This effective version will usually produce an exact deflated system, which has a simple zero whose partial projection corresponds to the isolated singular zero of the input system. Furthermore, we also provide the size bound of our method. To our knowledge, this is the first time the deflation of polynomial systems has been considered from the perspective of linear combination.

Similarly, before giving our theoretical results, we first show our main idea with a simple example.

Example 6.
Still consider Example 3. Once given an approximate zero p̃ of the input system, by Example 5 we know that the computed coefficient α̃ is inexact.

Now we introduce a new variable α_1. Let h = f_2 + α_1 f_1 and compute

∂h/∂x = 1 + α_1(2x + 1),  ∂h/∂y = 2y − 1 − α_1.

As in Example 5, we have α̃_1 ≈ −1. Given a tolerance ε, we have

rank(J(f_1, ∂h/∂x, ∂h/∂y)(p̃, α̃_1), ε) = 2 < 3.

We do this process once again and introduce two new variables α_2, α_3. Let g = ∂h/∂y + α_2 f_1 + α_3 ∂h/∂x and compute

∂g/∂x = 2α_1α_3 + α_2(2x + 1),  ∂g/∂y = 2 − α_2,  ∂g/∂α_1 = α_3(2x + 1) − 1.

By solving another least-squares problem, we get the approximate values α̃_2 ≈ 2, α̃_3 ≈ 1. Then, we have

rank(J(f_1, ∂h/∂x, ∂g/∂x, ∂g/∂y, ∂g/∂α_1)(p̃, α̃_1, α̃_2, α̃_3), ε) = 5.

Thus, we get a polynomial system

F̃′(x, α) = {f_1, ∂h/∂x, ∂g/∂x, ∂g/∂y, ∂g/∂α_1},

whose Jacobian matrix at (p̃, α̃_1, α̃_2, α̃_3) has full rank under the tolerance ε. In fact, we can find that (0, 0, −1, 2, 1) is a simple zero of F̃′(x, α) = 0 and the partial projection (0, 0) of (0, 0, −1, 2, 1) corresponds to the isolated singular zero p of the input system F.

Given a polynomial system with an isolated zero, we have the following lemma.
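The least-squares steps in Example 6 are plain normal-equation solves. The following sketch of ours reproduces the first one for f_1 = x − y + x^2, f_2 = x − y + y^2 and h = f_2 + α_1 f_1; the approximate zero p̃ below is an assumed value, not one taken from the paper.

```python
# Our own numerical sketch of the first least-squares step in Example 6,
# for f1 = x - y + x^2, f2 = x - y + y^2 and h = f2 + alpha_1 * f1.
# The approximate zero p~ is an assumed value, not one from the paper.

def grad_f1(x, y):
    return (1.0 + 2.0 * x, -1.0)        # gradient of f1 = x - y + x^2

def grad_f2(x, y):
    return (1.0, -1.0 + 2.0 * y)        # gradient of f2 = x - y + y^2

px, py = 1e-3, 2e-3                     # assumed approximate singular zero
g1, g2 = grad_f1(px, py), grad_f2(px, py)

# choose alpha_1 minimizing |grad f2 + alpha_1 * grad f1|^2 at p~
# (normal equation of a one-variable least-squares problem)
alpha1 = -(g1[0] * g2[0] + g1[1] * g2[1]) / (g1[0] ** 2 + g1[1] ** 2)

dh_dx = g2[0] + alpha1 * g1[0]          # value of dh/dx at (p~, alpha1)
dh_dy = g2[1] + alpha1 * g1[1]          # value of dh/dy at (p~, alpha1)

print(alpha1)        # close to the exact coefficient -1
print(dh_dx, dh_dy)  # both of the order of |p~ - p|, far smaller than 1
```

At the exact zero p the minimizer is exactly −1 and both partial derivatives of h vanish; the deflation then continues with the new variable α_1 kept symbolic, as in the example.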
Lemma 10. [12] Let F = {f_1, ..., f_n} ⊂ C[x] be a polynomial system and p ∈ C^n an isolated singular zero of F = 0. Let λ = (λ_1, ..., λ_n) ∈ C^n be a nonzero row vector which satisfies J(F)(p) λ^T = 0. For the new system

G = {λ_1 ∂f_1/∂x_1 + ... + λ_n ∂f_1/∂x_n, ..., λ_1 ∂f_n/∂x_1 + ... + λ_n ∂f_n/∂x_n},

the multiplicity of p in {F, G} = 0 is lower than the multiplicity of p in F = 0.

Remarks.
In Remark 2.1 of [8], the authors mention that deflation can also be constructed using the left null space. That is, we can replace G by the following system

G′ = {λ_1 ∂f_1/∂x_1 + ... + λ_n ∂f_n/∂x_1, ..., λ_1 ∂f_1/∂x_n + ... + λ_n ∂f_n/∂x_n},   (6)

where λ J(F)(p) = 0. Furthermore, we have the following lemma.

Lemma 11.
Let F = {f_1, ..., f_n} ⊂ C[x] be a polynomial system and p ∈ C^n an isolated singular zero of F = 0. Assume rank(J(f_1, ..., f_s)(p)) = rank(J(F)(p)) = s. Consider the augmented system G = {f_1, ..., f_n, h_1, ..., h_n} ⊂ C[x, α], where

h_j = α_1 ∂f_1/∂x_j + ··· + α_s ∂f_s/∂x_j + ∂f_{s+1}/∂x_j, j = 1, ..., n.

Then, we have:

1. there exists a unique α̂ ∈ C^s such that the system G has an isolated zero at (p, α̂);
2. the multiplicity of G at (p, α̂) is lower than that of F at p.

Proof. Let A_{ij}(x) = ∂f_i/∂x_j ∈ C[x], a_{ij} = ∂f_i(p)/∂x_j ∈ C, i = 1, ..., s + 1, j = 1, ..., n. Denote A = (a_{ij}), i = 1, ..., s, j = 1, ..., n, and the row vector b = (a_{s+1,1}, ..., a_{s+1,n}).

On one hand, when we fix x = p, the system

H(p, α) = {h_j(p, α) = a_{1j} α_1 + ... + a_{sj} α_s + a_{s+1,j}, j = 1, ..., n}

is a linear system in the variables α_1, ..., α_s. Furthermore, it is easy to check that α̂, which is determined by A A^T α̂ = −A b^T, is the unique zero of H(p, α) = 0. That is, there exists a unique α̂ such that the system G has an isolated zero at (p, α̂).

On the other hand, with row operations, we can reduce the system G to the system

{α_1 = l_1(x), ..., α_s = l_s(x)},

where the l_i(x) are rational expressions and α̂_i = l_i(p). Thus, considering the multiplicity of G at (p, α̂) is equivalent to considering the multiplicity of G(x, α̂) at p. Note that (α̂_1, ..., α̂_s, 1, 0, ..., 0) J(F)(p) = 0. By Lemma 10 and (6), we know the second part holds. Thus, we finish the proof.

In the above lemma, we construct n new polynomials h_1, ..., h_n. In fact, we can obtain them in the following way. Note that

rank(J(f_1, ..., f_s)(p)) = rank(J(F)(p)) = s.

We know easily that J(f_{s+1})(p) and J(f_1)(p), ..., J(f_s)(p) are linearly dependent. Thus, we can use a linear combination of f_{s+1} and f_1, . . .
, f_s to eliminate this linear relationship. Let

g = f_{s+1} + Σ_{i=1}^s α_i f_i,   (7)

where the new variables α_i are used to represent the coefficients of the linear combination. Computing all the derivatives of g with respect to the variables x_1, ..., x_n, we get

h_j = ∂g/∂x_j = α_1 ∂f_1/∂x_j + ··· + α_s ∂f_s/∂x_j + ∂f_{s+1}/∂x_j, j = 1, ..., n.

Thus, the above lemma tells us that after taking the linear combination of f_{s+1} and f_1, ..., f_s, we get an augmented system G which satisfies that the multiplicity of G at (p, α̂) is lower than that of F at p. By repeatedly using the linear combination of polynomials in the original system and its related derivatives, we can construct a final deflated system which possesses an isolated simple zero. Denote by µ the multiplicity of F at p. We do this repetitive process at most µ times.

Further, based on Lemma 11, we have the following theorem.

Theorem 12.
Let F = {f_1, . . . , f_n} ⊂ C[x] be a polynomial system and let p ∈ C^n be an isolated singular zero of F = 0. Denote m = deg(F). Then there exists a square polynomial system F̃′(x, α) = {g_1, . . . , g_t} ⊂ C[x, α] such that:
1. (p, α̂) ∈ C^t is an isolated simple zero of F̃′(x, α) = 0;
2. t is bounded by µn, where µ is the multiplicity of p in F;
3. deg(F̃′(x, α)) ≤ m.

Next, based on Lemma 11 and Theorem 12, we give an effective algorithm below that computes a deflated square system from the input system together with an approximate isolated singular zero. It is an effective version of Lemma 8. Here θ is a tolerance used to detect the regularity of the polynomials, and ε is another tolerance used to judge the numerical rank of the Jacobian matrix at an approximate zero; we discuss both in the next subsection.

Algorithm 1 CDSS: Compute a deflated square system.
Input: a polynomial system F := {f_1, . . . , f_n} ⊂ C[x], an approximate isolated singular solution p̃ ∈ C^n, and two tolerances θ and ε.
Output: a square polynomial system F̃′(x, α) := {f̃_1, . . . , f̃_t} ⊂ C[x, α] and a point α̃, such that (p̃, α̃) is an approximate regular zero of F̃′(x, α) = 0.
1: Compute G := {d_x^γ(f) | d_x^γ(f) is θ-regular at p̃, f ∈ F};
2: Let H := F ∪ G, X := x;
3: while rank(J(H)(p̃), ε) ≠ |X| do
4:   Compute r := rank(J(H)(p̃), ε);
5:   Choose any H_1 := {h_1, . . . , h_r} ⊂ H such that rank(J(H_1)(p̃), ε) = r;
6:   Choose h_{r+1} ∈ H \ H_1 such that dim V(H_1, h_{r+1}) = n − r − 1;
7:   Let g := h_{r+1} + Σ_{j=1}^{r} α_j h_j;
8:   Compute α̃ := LeastSquares((J(H_1, h_{r+1})(p̃))^T (α, 1)^T = 0);
9:   Compute g_1 := J_1(g), . . . , g_n := J_n(g);
10:  Set H := {H_1, g_1, . . . , g_n}, X := x ∪ α and p̃ := (p̃, α̃);
11: end while
Return: a square system F̃′(x, α) = {H_1, g_1, . . . , g_n} and a point α̃.

Remarks.
1. The termination and correctness of the algorithm are guaranteed by Lemma 11 and Theorem 12.
2. In the above algorithm, we first compute, for every f_i, derivative polynomials that are θ-regular at p̃. Then we put all these polynomials together with F to form a system whose Jacobian matrix at p̃ has maximal rank. This operation makes r as big as possible. In some cases we have r = n, which means we need not introduce new variables at all, as in Example 8. This preprocessing step speeds up our algorithm.

Now, we give two examples to illustrate Algorithm 1.

Example 7.
Consider a polynomial system F = {f_1 = − + x + 2x + 3x + 4x − x, f_2 = x − x − x − x + 2x x + 3x x + 4x x, f_3 = 8 − x − x + 2x + 4x x − x x, f_4 = − x + 2x + 4x + 4x}. Given an approximate singular zero p̃ = (p̃_1, p̃_2, p̃_3, p̃_4) = (1. , −. , −. , . ) of F = 0 and the tolerance ε = 0. , we first take the Taylor expansion of f_1 at p̃:

f_1 = 3·10^− − ·10^−(x − p̃) + 0. (x − p̃) + 0. (x − p̃) − . (x − p̃)(x − p̃) − (x − p̃)(x − p̃).

Consider the tolerance θ = 0. . Since |f_1(p̃)| < θ, |∂f_1/∂x_i(p̃)| < θ (i = 1, 2, 3, 4) and |∂²f_1/∂x²(p̃)| > θ, we get a polynomial ∂f_1/∂x = −x + 4x − x x, which is θ-regular at p̃. Similarly, by the Taylor expansions of f_2, f_3, f_4 at p̃, we find that f_2, f_3, f_4 are all θ-regular at p̃.

Thus, by Algorithm 1, we have G = {f_2, f_3, −x + 4x − x x, f_4} and compute r = rank(J(G)(p̃), ε) = 3. We can choose H_1 = {h_1 = f_2, h_2 = f_3, h_3 = −x + 4x − x x} from H = G ∪ F. For h_4 ∈ H \ H_1, let g = h_4 + α_1 h_1 + α_2 h_2 + α_3 h_3. First, by solving the least-squares problem

LeastSquares((J(H_1, h_4)(p̃))^T [α_1, α_2, α_3, 1]^T = 0),

we get an approximate value (α̃_1, α̃_2, α̃_3) = (−. , −. , . ). Then, compute

g_1 = ∂g/∂x_1 = 3 + 32α + α + 4α − αx + 2αx + 3αx + 4αx − αx,
g_2 = ∂g/∂x_2 = 2 + 2α − α + 2αx,
g_3 = ∂g/∂x_3 = 4 + 3α − α + 3αx,
g_4 = ∂g/∂x_4 = 4 + 4α − α + 4α + 4αx − αx,

and we get the polynomial set H′ = {h_1, h_2, h_3, g_1, g_2, g_3, g_4}, which satisfies

rank(J(H′)(p̃, α̃_1, α̃_2, α̃_3), ε) = 7.

Thus, we get the final square system F̃′(x, α) = H′ and the point α̃ = (α̃_1, α̃_2, α̃_3) = (−. , −. , . ).
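The least-squares step used above to initialize α̃ amounts to an ordinary least-squares problem: with A the matrix whose rows are the gradients J(h_1)(p̃), . . . , J(h_r)(p̃), and b the gradient J(h_{r+1})(p̃), one minimizes ||A^T α + b^T||, i.e. solves AA^T α̂ = −Ab^T as in Lemma 11. A minimal NumPy sketch (the 3×4 matrix below is made up for illustration; the paper's own code is in Matlab):

```python
import numpy as np

# Rows of A: gradients of h_1, ..., h_r at p~; b: gradient of h_{r+1}.
# Minimizing ||A^T alpha + b^T|| is the normal-equation system A A^T alpha = -A b^T.
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 1.0]])
b = 2.0 * A[0] - 1.0 * A[1] + 0.5 * A[2]   # b chosen dependent on the rows of A

# lstsq solves A^T alpha = -b in the least-squares sense; here the system is
# consistent, so it recovers the exact dependency coefficients (-2, 1, -0.5).
alpha, *_ = np.linalg.lstsq(A.T, -b, rcond=None)
print(alpha)
```

Because J(h_{r+1})(p) lies in the row space of the J(h_i)(p) (the rank assumption of Lemma 11), the exact problem has a unique solution, and the numerical least-squares value is a good initial guess for Newton refinement.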
In this example, given the input polynomial system F with an approximate singular zero p̃, we get a final square system by Algorithm 1 in only one step. In fact, α_3 is not necessary in this example: we can already acquire a suitable square system F̃′(x, α) by using g = f + α f + α f. We give another example to illustrate the case where no new variables are introduced at all.

Example 8. (DZ2) Continue with Example 4. Given an approximate isolated singular zero p̃ = (p̃_1, p̃_2, p̃_3) = (0. , . , −. ) and a tolerance ε = 0. , we use the Taylor series to expand f_i (i = 1, 2, 3) at p̃ and compare all the coefficients with a tolerance θ = ε. For f_1, we have

f_1 = 2. ·10^− + 1. ·10^−(x − p̃_1) + 2. ·10^−(x − p̃_1) + 0. ·10^−(x − p̃_1) + (x − p̃_1)³.

It is obvious that only the absolute value of the coefficient of (x − p̃_1)³ is bigger than θ. Therefore, we compute d^(3,0,0)(f_1) = 4x, which is θ-regular at p̃. Similarly, for f_2, f_3 we obtain the corresponding polynomials {2x, 4y} and f_3 itself. Thus, we have G = {4x, 2x, 4y, f_3}. It is easy to check that

r = rank(J(G)(p̃), ε) = rank(J(4x, 4y, f_3)(p̃), ε) = 3.

Thus, we get the needed square system F̃′(x) = {4x, 4y, f_3}.

In the above two examples, we assumed a correct judgement of the tolerances θ and ε. In fact, the choice of θ and ε is important to our algorithm, and next we give some analysis of them.

The analysis of the tolerances θ and ε

As we said in Example 1, θ is an important parameter for deciding whether a polynomial is θ-regular at p̃. The other important parameter involved in our actual computation is ε, which is used to judge the numerical rank of the Jacobian matrix.
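The ε-rank used throughout the algorithm (the "numerical rank") is obtained from the singular values of the Jacobian matrix: one counts the singular values exceeding ε. A small sketch in Python/NumPy (the function name `numerical_rank` is ours; the paper's experiments use Matlab/INTLAB):

```python
import numpy as np

def numerical_rank(J, eps):
    """Numerical rank of J with tolerance eps: the number of
    singular values of J that exceed eps."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s > eps))

# A Jacobian evaluated near a singular zero: one singular value is O(1),
# the other is tiny, so the eps-rank is 1 although the exact rank is 2.
J = np.array([[2e-8, 2e-4],
              [1.0,  0.0]])
print(numerical_rank(J, 0.05))   # -> 1
print(numerical_rank(J, 1e-6))   # -> 2
```

As the two calls show, the answer depends on ε: too large an ε underestimates the rank and too small an ε can overestimate it, which is exactly the trade-off analyzed in this subsection.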
Therefore, in this section we give some analysis of the parameters θ and ε.

First, we point out that θ is related to the absolute values of the coefficients of the Taylor expansion of the polynomial at its approximate zero. For example, consider the polynomial f = x² + 10000y² with an approximate zero p̃ = (p̃_1, p̃_2) = (0. , −. ). The Taylor expansion of f at p̃ is

f = 0. + 0. (x − p̃_1) − . (y − p̃_2) + (x − p̃_1)² + 10000(y − p̃_2)².

Given θ = 0.5, we have |f(p̃)| < θ, |∂f/∂x(p̃)| < θ and |∂f/∂y(p̃)| > θ. Thus, we draw the conclusion that f is θ-regular at p̃. However, since the lowest degree of f is 2, we know that f is in fact singular at the exact zero p = (0, 0). That means θ is not chosen properly. The main reason is that the coefficients of f fluctuate greatly, or the accuracy of p̃ is not high enough. If we are given another approximate zero q̃ = (q̃_1, q̃_2) = (0. , −. ) of higher accuracy, the Taylor expansion becomes

f = 1. ·10^− + 0. (x − q̃_1) − . (y − q̃_2) + (x − q̃_1)² + 10000(y − q̃_2)².

By this time, using the same θ = 0.
5, we find that f is θ-singular at q̃, which is the same judgement as in the exact case at p.

In actual computation, to deal with this case, we give one solution. For a nonzero polynomial f ∈ C[x], let Γ_f be the set of the absolute values of all the coefficients of f, and denote the maximal and minimal elements of Γ_f by M = max(Γ_f) and m = min(Γ_f) respectively. If m/M ≤ 10^−a, we regard the coefficients of f as fluctuating a lot and take θ = (m + M)/(2M); else, we take θ = (m + M)/(2M × 10^a), where a ∈ N is related to the precision of the given approximate zero p̃. For example, if the given approximate zero p̃ is accurate to three significant digits, we can take a = 3. Of course, we can overcome this problem thoroughly by refining the approximate zero to higher precision with the input system, provided the Jacobian matrix of the system at p̃ is numerically nonsingular.

In summary, the situation above arises when we judge a polynomial that is singular at the exact zero p as being θ-regular at the approximate zero p̃. The opposite situation is that a polynomial which is regular at the exact zero p may be judged θ-singular at the approximate zero p̃. For example, consider the polynomial f = x + x² + 10000y² with the approximate zero q̃ = (q̃_1, q̃_2) = (0. , −. ). Its Taylor expansion is

f = 5. ·10^− + 0. (x − q̃_1) − . (y − q̃_2) + (x − q̃_1)² + 10000(y − q̃_2)².

Still using θ = 0.5, we find that f is θ-singular at q̃. In fact, f is regular at p = (0, 0) because of its linear term; the problem is again the choice of θ. When we take θ = 0.05, we acquire the appropriate result.

From the above analysis we see that the choice of θ is crucial to our method. We now give a further theoretical analysis of the tolerance θ. Here we assume that the judgement based on the other tolerance ε, which is used to decide the numerical rank of the Jacobian matrix at the approximate zero, is correct.

Let θ be a tolerance. Assume that we have computed an intermediate system H = {h_1, . . . , h_s} ⊂ C[x′], where x′ = (x, α). Assume that p is an isolated singular zero of the original system and that the exact value of α related to the coefficients of the linear combinations is α̂; denote p′ = (p, α̂). Let p̃′ be an approximate zero of H related to p′ such that rank(J(H)(p̃′)) = s.

Next, we consider one more polynomial h ∈ C[x′]. If h, which is regular at p′, is judged as being θ-singular at p̃′, we may get a perturbed system in the end. Specifically, compute the Taylor expansion of h at p̃′:

h = h(p̃′) + Σ_j (∂h(p̃′)/∂x_j)(x_j − p̃′_j) + Σ_{i,j} (∂²h(p̃′)/∂x_i∂x_j)(x_i − p̃′_i)(x_j − p̃′_j) + · · · .

Since h is θ-singular at p̃′, we know that |h(p̃′)| < θ and all |∂h(p̃′)/∂x_j| < θ. Thus, we compute

∂h/∂x_j = ∂h(p̃′)/∂x_j + 2 Σ_i (∂²h(p̃′)/∂x_i∂x_j)(x_i − p̃′_i) + · · · .   (8)

If there exists some j such that

rank(J(H, ∂h/∂x_j)(p̃′)) = s + 1 and ∂h(p′)/∂x_j ≠ 0,

we may derive a perturbed system in the end, where ∂h/∂x_j has one and only one perturbed term, the constant ∂h(p′)/∂x_j, compared to the polynomial ∂h/∂x_j − ∂h(p′)/∂x_j, which vanishes at p′. For the other cases, if

rank(J(H, ∂h/∂x_j)(p̃′)) = s + 1 and ∂h(p′)/∂x_j = 0,

it is clear that ∂h/∂x_j vanishes at p′, and thus it is exact. If

rank(J(H, ∂h/∂x_j)(p̃′)) = s for all j,

then according to our constructive method, we should take the linear combination

f = ∂h/∂x_j + Σ_{i=1}^{s} α_i h_i   (for some j)

and compute its derivatives.
Thus the perturbed term ∂h(p′)/∂x_j disappears, and we get an exact polynomial which vanishes at p′ in the end. Notice that even if the h_i have perturbed terms, these are constants h_i(p′); if we compute the derivatives of f, these terms disappear. Thus, whether or not the h_i carry perturbed terms, the polynomials in the final deflated system derived by the linear combinations vanish at the exact zero p′.

Now we consider the case where h is regarded as θ-regular at p̃′ while it is actually singular at p′. If rank(J(H, h)(p̃′)) = s, we take the linear combination of h with h_1, . . . , h_s and compute its derivatives; it is obvious that this operation has no influence on our result. Usually the case rank(J(H, h)(p̃′)) = s + 1 will not happen; it is related to the numerical computation of the rank of the Jacobian matrix of (H, h) at p̃′.

As a summary of the foregoing analysis, we have: Let F = {f_1, . . . , f_n} ⊂ C[x] be a polynomial system, let p̃ ∈ C^n be an approximate zero of F = 0, and let θ be a tolerance. According to our method, we acquire a final system F̃′ ⊂ C[x, α]. While we compute the final system F̃′:

1. if we judge a polynomial which is singular at the exact zero p as being θ-regular at p̃, the final system F̃′ is accurate;
2. if we judge a polynomial which is regular at the exact zero p as being θ-singular at p̃, the final system F̃′ = F̃ + ϑ is a perturbed system, where F̃ is an accurate system and ϑ is the perturbation, which satisfies max_i |ϑ_i| < θ.

In actual computation, to make our method as accurate as possible, we add an adaptive adjustment step at the end of our algorithm. To be specific, assume that the initial tolerance is θ = θ_0. After the refining steps, denote the refined zero by p̄. We compute the Taylor expansions at p̄ of all the polynomials involved in computing the system F̃′, including all the input polynomials.
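The θ-regularity judgement that this analysis revolves around can be sketched numerically. The check below is first-order only, whereas the paper compares all Taylor coefficients, and the helper names are ours; it reproduces the pitfall discussed above: f = x² + 10000y² is singular at the exact zero (0, 0), yet is judged θ-regular at a nearby approximate zero because of its large coefficient.

```python
import numpy as np

def taylor_order01(f, p, h=1e-6):
    """Value and first-order Taylor coefficients (the gradient) of f at p,
    estimated by central differences."""
    p = np.asarray(p, dtype=float)
    grad = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = h
        grad[i] = (f(p + e) - f(p - e)) / (2 * h)
    return f(p), grad

def is_theta_regular(f, p, theta):
    """f is judged theta-regular at p if |f(p)| < theta
    but some first-order coefficient |df/dx_i(p)| > theta."""
    v, grad = taylor_order01(f, p)
    return bool(abs(v) < theta and np.any(np.abs(grad) > theta))

f = lambda v: v[0]**2 + 10000 * v[1]**2        # singular at the exact zero (0, 0)
print(is_theta_regular(f, [1e-3, -1e-3], 0.5)) # wrongly judged theta-regular: True

g = lambda v: v[0]**2                          # no huge coefficients
print(is_theta_regular(g, [1e-3], 0.5))        # correctly judged theta-singular: False
```

The first call shows exactly the failure mode that motivates the m/M rescaling heuristic and the adaptive adjustment of θ described in this subsection.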
We denote by θ_1 the maximal absolute value of the relevant coefficients, both of the polynomials judged θ-singular at p̃ and of the polynomials judged θ-regular at p̃. It is also the quantity named "Max err" in Tables 1 and 2 in the next section. It is easy to imagine that usually θ_1 ≤ θ_0. If θ_1 has much higher precision than θ_0, such as θ_0 = 10^− and θ_1 = 10^−, we are sure that our conclusion is exact. If θ_1 > θ_0, or θ_1 still has bad accuracy, such as θ_0 = 10^− and θ_1 = 10^− or θ_1 = 10^−, we take a smaller θ < min{θ_0, θ_1} and run our method again. After repeating our method several times, if θ_1 is still bad, we merely get a perturbed system.

Now, we give two examples to explain the above analysis.

Example 9.
Given the polynomial system F = {f_1 = x + x² + 10000y², f_2 = x² + 10000y²} with an approximate zero p̃ = (p̃_1, p̃_2) = (0. , −. ). Consider the tolerances ε = 0. and θ = 0. . By the Taylor expansions of the f_i at p̃, we know that f_1, f_2 are both θ-regular at p̃.

Next, according to Algorithm 1, we compute rank(J(F)(p̃), ε) = 2. Thus, we can use Newton's method to refine p̃ to higher accuracy and get p̃′ = (0. , −. ). At this point it is easy to check that f_1 is θ-regular at p̃′ while f_2 is θ-singular at p̃′. Therefore, for f_2 we have

∂f_2/∂x = 2x,   ∂f_2/∂y = 20000y,

which are both θ-regular at p̃′. Furthermore, rank(J(f_1, ∂f_2/∂y)(p̃′), ε) = 2. Thus, we get the final system F̃′ = {f_1, 20000y}. After applying Newton's method, we get the refined zero p̄ = (p̄_1, p̄_2) = 10^− · (0. , . ).

At last, we check whether our chosen θ is proper. We compute the Taylor expansion at the refined zero p̄ of every polynomial that was judged θ-singular at p̃, and get

f_2 = 2. ·10^− + 1. ·10^− · (x − p̄_1) + (x − p̄_1)² + 20000 · (y − p̄_2)².

Thus, we have

Max err := max{2. ·10^−, 1. ·10^−} = 1. ·10^− ≪ θ,

which means that our final system F̃′ is more accurate than before.

Example 10.
Consider the system F = {f_1 = x + x² + 2xy + 10000y², f_2 = x/20 + x² + 2xy + 10000y²} with an approximate zero p̃ = (p̃_1, p̃_2) = (0. , −. ). Let the tolerances be ε = 0. and θ = 0. . Similarly, by the Taylor expansions of the f_i at p̃, we judge that f_1 is θ-regular at p̃ and f_2 is θ-singular at p̃. Therefore, we compute

∂f_2/∂x = 1/20 + 2x + 2y,   ∂f_2/∂y = 2x + 20000y,

and rank(J(f_1, ∂f_2/∂x)(p̃), ε) = 2. Thus, we get the final system F̃′ = {f_1, 1/20 + 2x + 2y}. Obviously, F̃′ is a perturbed system with perturbed term ϑ = 1/20, which satisfies |ϑ| < θ. It is easy to imagine that with this F̃′ we could not get a good result. The main reason is that θ is too big, which leads to a wrong judgement on whether f_2 is θ-regular at p̃.

If we are given another, smaller tolerance θ′, we get the right judgement that f_2 is θ′-regular at p̃. Thus, we consider the linear combination of f_2 and f_1: let g = f_2 + αf_1 and compute

g_1 = ∂g/∂x = 1/20 + 2x + 2y + α(2x + 2y + 1),
g_2 = ∂g/∂y = 2x + 20000y + α(2x + 20000y),

where α is a new variable with initial value α̃ ≈ −1/20. Compute rank(J(f_2, g_1, g_2)(p̃, α̃), ε) = 3. Thus, we get the final system F̃′ = {f_2, g_1, g_2}. Similarly, applying Newton's method to the final system F̃′, we get the refined zero p̄ = (0. , . , −. ). Then, we check the degree-one coefficients of the Taylor expansion of g at p̄ and get

Max err := max{0, 0, 0} = 0 ≪ θ′,

so we are sure that our final system F̃′ is accurate. Here "0" is not an exact zero but zero at Matlab machine accuracy.

From the above two examples, we can see that once an appropriate tolerance θ is given, we can make sure that our final system is accurate. Otherwise, what we acquire is just a perturbed system, such as the first system F̃′ in Example 10.

Next, we continue analyzing the other tolerance ε, which is used to judge the numerical rank of a matrix. That is, we determine the numerical rank by comparing the singular values of the Jacobian matrix at the approximate zero with the tolerance ε. Specifically, assume that we have computed an intermediate system H = {h_1, . . . , h_s} ⊂ C[x′], where x′ = (x, α). Assume that p is an isolated singular zero of the original system and that the exact value of α related to the coefficients of the linear combinations is α̂. Denote p′ = (p, α̂) ∈ C^t.
Let p̃′ ∈ C^t be an approximate zero of H related to p′ such that

rank(J(H)(p̃′), ε) = s.

Next, we consider one more polynomial h_{s+1} ∈ C[x′]. Given the tolerance θ, we can compute from h_{s+1} a polynomial h which is θ-regular at p̃′. Denote

rank(J(h_1, . . . , h_s, h)(p′)) = r_1,   rank(J(h_1, . . . , h_s, h)(p̃′), ε) = r_2.

For simplicity, we denote by H′ the deflated system obtained from {H, h_{s+1}} after one deflation step, and by q its corresponding exact zero, whose partial projection is p′. According to the above analysis of θ, for h_{s+1} we have the following cases:

1. If θ is chosen properly, that is, we judge h_{s+1}, which is regular or singular at p′, as being θ-regular or θ-singular at p̃′ respectively, then h is regular at p′. Thus, we have:
(a) if r_1 = r_2, we of course get an exact system H′; that is, H′(q) = 0.
(b) if r_2 < r_1, according to our algorithm we take the linear combination

g = h + Σ_{j=1}^{s} α_j h_j

and compute all the derivatives of g with respect to all variables: g_i = ∂g/∂x′_i, i = 1, . . . , t. Correspondingly, H′ = {h_1, . . . , h_s, g_1, . . . , g_t}. Note that J(h)(p′) and J(h_1)(p′), . . . , J(h_s)(p′) are actually linearly independent, which means the equations (α_1, . . . , α_s, 1) J(H, h)(p′) = 0 have no solution. Thus, although we can give the initial values α̃_j of the α_j by solving a least-squares problem, this linear independence brings in some inexact polynomials g_i, with g_i(q) ≠ 0. Hence we may get a perturbed system H′; that is, H′(q) ≠ 0.
(c) if r_1 < r_2, we add h to the system H directly and get an exact system H′ = {h_1, . . . , h_s, h}.

2. If θ is chosen too big, that is, we judge h_{s+1}, which is regular at p′, as being θ-singular at p̃′, we may get a perturbed polynomial h, meaning that h(p′) ≠ 0 while h − h(p′) is regular at p′. Thus, we have:
(a) if r_1 = r_2, only the choice of θ affects our final result; we may get a perturbed system H′ in this case.
(b) if r_2 < r_1, by a discussion similar to case 1(b), we get a perturbed system H′.
(c) if r_1 < r_2, we add h to {h_1, . . . , h_s} directly and get a perturbed system H′.

3. If θ is chosen too small, that is, we judge h_{s+1}, which is singular at p′, as being θ-regular at p̃′, then h = h_{s+1} is singular at p′. Thus, we have:
(a) if r_1 = r_2, only the choice of θ affects our result; the system H′ is exact in this case.
(b) if r_2 < r_1, similarly to case 1(b) we take the linear combination g = h + Σ_{j=1}^{s} α_j h_j. One difference from 1(b) is that h is singular at p′; thus all the g_i = ∂g/∂x′_i vanish exactly at q, and the system H′ is also exact in this case.
(c) if r_1 < r_2, we add h to {h_1, . . . , h_s} directly and get an exact system H′.

With the above analysis, we know that the choice of the tolerances θ and ε influences whether the final deflated system is exact or perturbed. To judge which case a final deflated system belongs to, we use the following method. Denote the input system by F = {f_1, . . . , f_n} and the final deflated system by F̃′. Since we use Newton's method to refine with the system F̃′, denote Newton's iteration sequence by {p̃_l, l ≥ 1} and the final certified zero by p̃′.

• First, we check whether Newton's iteration sequence {p̃_l, l ≥ 1} converges quadratically. If not, we claim that our deflated system is a perturbed system.
• If it does, we compute Δ := max{|f_i(p̃′)| : f_i ∈ F, i = 1, . . . , n}. Next, we fix a tolerance θ′, usually a very small value, and compare the magnitudes of θ′ and Δ. If Δ < θ′, we regard the final deflated system F̃′ as an exact system; otherwise, F̃′ is a perturbed system.

Of course, for the exact case, we are done.
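The exact-versus-perturbed test just described, quadratic convergence of the Newton iterates plus the residual Δ, can be sketched as follows (Python; `newton` and `looks_quadratic` are illustrative names, not the paper's code, and the threshold `10 * s1**2` is a heuristic of ours):

```python
import numpy as np

def newton(F, J, x0, tol=1e-14, itmax=60):
    """Newton's method, returning all iterates so the convergence
    rate of the sequence can be inspected afterwards."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(itmax):
        dx = np.linalg.solve(J(xs[-1]), F(xs[-1]))
        xs.append(xs[-1] - dx)
        if np.linalg.norm(dx) < tol:
            break
    return xs

def looks_quadratic(xs):
    """Heuristic check: under quadratic convergence the step sizes
    roughly square from one iteration to the next."""
    steps = [np.linalg.norm(b - a) for a, b in zip(xs, xs[1:])]
    return all(s2 <= 10 * s1**2 + 1e-15 for s1, s2 in zip(steps, steps[1:]))

# Regular zero: quadratic convergence and a small residual Delta.
F = lambda v: np.array([v[0]**2 - 2.0, v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 0.0], [0.0, 1.0]])
xs = newton(F, J, [1.5, 0.5])
delta = np.max(np.abs(F(xs[-1])))
print(looks_quadratic(xs), delta < 1e-10)      # True True

# Singular zero of x^2 = 0: Newton converges only linearly (steps halve).
Fs = lambda v: np.array([v[0]**2])
Js = lambda v: np.array([[2.0 * v[0]]])
print(looks_quadratic(newton(Fs, Js, [1.0])))  # False
```

The second run illustrates why a perturbed (or insufficiently deflated) system is detected: the iterates lose the step-squaring behaviour and the test fails.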
For the perturbed case, we hope to make our final deflated system as accurate as possible by adjusting the values of θ and ε. However, we still do not have a good way to separate the effects of the two tolerances θ and ε on the final system. Fortunately, the tolerance θ is only used to accelerate our algorithm and is not necessary; therefore, according to the remark after Lemma 10, we can use the deflation construction (6) to compute the final system. In this case, we just need to consider the tolerance ε, which is used to judge the numerical rank of the Jacobian matrix. That is to say, even if the first two steps of Algorithm 1 are removed, our deflation construction still works well. Allowing for possible misjudgements, our final system can still be a perturbed one. Next, we give a possible modified method to overcome this case.

Let F = {f_1, . . . , f_n} be the input system, let p̃ ∈ C^n be the initial approximate zero, and let ε and θ′ be the given tolerances.

• First, assume rank(J(F)(p̃), ε) = n. We apply Newton's method to the system F and get the refined zero p̃′. Next, we check whether Newton's iteration sequence converges quadratically. If it does, we continue by comparing the magnitudes of θ′ and Δ: if Δ < θ′, we regard F as a system with an isolated simple zero; otherwise, we know that F is a system with a multiple zero and rank(J(F)(p̃), ε) < n.

• Assume rank(J(F)(p̃), ε) = n −
1. After applying the deflation construction in Algorithm 1 once (steps 3 to 10), we get a deflated system F̃_1 and an approximate zero p̃_1. Then, we consider all the possibilities for rank(J(F̃_1)(p̃_1), ε). For each case, we continue the deflation construction of Algorithm 1 and use the judging method above to check which case the final deflated system belongs to. As soon as the final deflated system is judged to be an exact system, we stop the deflation process; otherwise, we know that rank(J(F)(p̃), ε) < n − 1.

• Assume rank(J(F)(p̃), ε) = n − 2. We proceed as in the case of n − 1, and so on.

Example 11.
Continue with Example 10. Here, we only use the tolerance ε = 0. to judge the numerical rank. First, we compute rank(J(f_1, f_2)(p̃), ε) = 2. Thus, we consider using Newton's method on the system F directly. Given an iterative error of 10^−, we obtain the Newton iteration sequence p̃_1, . . . , p̃_10. One can check easily that this iteration sequence {p̃_j, j = 1, . . . , 10} converges only linearly. According to our judging criteria, we conclude that

rank(J(f_1, f_2)(p̃), ε) = 1.

Next, according to our construction process in Algorithm 1, we let g = f_2 + αf_1 and compute

g_1 = J_1(g) = α(2x + 2y + 1) + (1/20 + 2x + 2y),
g_2 = J_2(g) = α(2x + 20000y) + (2x + 20000y).

We have α̃ ≈ −1/20. Next, let F̃_1 = {f_2, g_1, g_2} and p̃_1 = (p̃, α̃). By our revised method above, we continue considering all the possibilities for rank(J(F̃_1)(p̃_1), ε); for example, we consider the case rank(J(F̃_1)(p̃_1), ε) = 3. Similarly, given the iterative error 10^−, applying Newton's method to the system F̃_1 yields the iteration sequence p̃_1, p̃_2, p̃_3, p̃_4 in the variables (x, y, α). It is easy to check that the iteration sequence {p̃_j, j = 1, 2, 3, 4} converges quadratically. Furthermore, given a tolerance θ′ = 10^−, we can compute

Δ := max{|f_1(p̃)|, |f_2(p̃)|} = 0

and verify that Δ < θ′. Thus, we regard the final deflated system F̃′ = F̃_1 as an exact system. At the same time, we stop our deflation process.

This finishes the analysis of the tolerances θ and ε. Once given a polynomial system F ⊂ C[x] with an isolated singular zero, we use Algorithm 1 to compute a new system F̃′(x, α) which has a simple zero. Moreover, according to the analysis of the tolerances θ and ε, our final system F̃′ is usually an accurate system. For the perturbed case, we also give one exhaustive way to adjust our final result to be as accurate as possible. Thus, we can use the final system F̃′ to certify the isolated zeros of the input system.

In the following, using the algorithm verifynlss in INTLAB [22], we give an example to explain how we certify the isolated singular zero of the input system.

Example 12.
Continue with Example 7. Applying the algorithm verifynlss, we get the system

F̃(x, α) = {f̃_1 = −94 + 32x + 2x + 3x + 4x − x,
f̃_2 = x − x − x − x + 2x x + 3x x + 4x x,
f̃_3 = −x + 4x − x x,
f̃_4 = 3 + 32α + α + 4α − αx + 2αx + 3αx + 4αx − αx,
f̃_5 = 2 + 2α − α + 2αx,
f̃_6 = 4 + 3α − α + 3αx,
f̃_7 = 4 + 4α − α + 4α + 4αx − αx},

together with two verified inclusions, X for x and A for α. For the deflated system F̃(x, α), we affirm that there is a unique isolated simple zero (x̂, α̂) ∈ (X, A) such that F̃(x̂, α̂) = 0. Moreover, the projection x̂ of (x̂, α̂) corresponds to the isolated singular zero of the input system F. That is to say, we have certified the isolated singular zero of the original system.

We implemented our method (Algorithm 1) in Matlab; the code and some examples can be found in . In this section, we show the results of the experiments and compare our method with some other methods. The experiments were done in Matlab R2012b with INTLAB-V5.5 on a computer with Windows 7, Intel i CPU and GB memory.

In [16], by modifying the method proposed by Yamamoto [18], a deflation method is given that computes a regular and square augmented system, which can be used to prove the existence of an isolated singular solution of a slightly perturbed system. Moreover, by applying the INTLAB function verifynlss [22], the authors also give an algorithm viss to compute verified error bounds. Noticing that their method is essentially a deflation method, we likewise implement our algorithm based on the INTLAB function verifynlss.

In Table 1, we compare our algorithm VDSS with the algorithm viss. These examples are relatively simple and of small scale; they can be found in [2, 16], and we also list them in . We denote by var the number of polynomials, by mul the multiplicity, and by Verified acc the final verified accuracy, which is measured by the breadth of the verified inclusion X. Max err is the quantity θ_1 mentioned in Section 4.2. We use the same initial accuracy 10^− for all the examples.
"true" means the two endpoints of the verified inclusion coincide. When the entry for Max err is "0", it does not mean that Max err is exactly zero, only that it is zero at Matlab machine precision.

Table 1: Comparison of VDSS and viss for simple systems

System    var  mul   Verified acc       Max err  time(s)           Final size
                     VDSS     viss               VDSS     viss     VDSS  viss
DZ1       4    131   true     e-322     0        0.3066   0.3337   4     16
DZ2       3    16    e-14     e-14      0        0.2989   0.7343   3     24
DZ3       2    4     e-14     e-15      e-14     0.8780   1.0093   3     10
cbms1     3    11    true     e-322     0        0.1851   0.1107   3     6
cbms2     3    8     true     e-322     0        0.2546   0.1271   3     6
mth191    3    4     e-14     e-14      e-32     0.3118   0.1221   4     6
KSS       10   638   e-14     e-14      0        8.2295   0.3036   19    20
RuGr09    2    4     e-323    e-14      0        0.1567   0.4955   2     8
LZ        100  3     e-320    e-14      0        2.0197   13.3068  100   300
Ojika1    2    3     e-14     e-14      0        0.7636   0.3447   5     6
Ojika2    3    2     e-14     e-14      e-16     0.3936   0.2942   5     6
Ojika3    3    2     e-14     e-14      0        0.3967   0.3427   4     6
Ojika4    3    3     e-14     e-14      0        0.1851   1.0621   3     9
Decker2   3    4     e-323    e-14      0        0.1752   0.4650   3     8
Caprasse  4    4     e-14     e-14      e-31     2.0180   0.5126   6     8
Cyclic9   9    4     e-14     e-14      e-15     5.9266   3.6878   12    18

From Table 1, we can see that our algorithm is effective. On one hand, the verified accuracy of our method is never worse than that of viss on any of these examples. On the other hand, thanks to our acceleration strategies, our practical size and computing time are smaller than those of viss in most cases.

We also compare our method with viss on large-scale polynomial systems. All the examples in Table 2 can be found in . The example LZ2000 can be found in [14]. The example nonpoly3 is a non-polynomial nonlinear system. We construct the other examples as follows. First, we produce some random polynomials to form a zero-dimensional system {f_1, . . . , f_n} which has a simple zero p, then set F = {f_i^{d_i} + g_i, 1 ≤ i ≤ n}, where g_i ∈ {f_1^{d′_1}, . . . , f_n^{d′_n}, 0} and the d_i, d′_i are suitable exponents. The new systems are always dense polynomial systems. The examples named simple1, reduce3, big1, big2, big3, large3, large6, large8 are of the form with g_i = 0 (1 ≤ i ≤ n); the examples named addvar3, unre3, unre5, rankone2, rankone3 are of the form with the g_i not all zero. The ranks of the Jacobian matrices of the examples rankone2 and rankone3 at their zeros are both one. In Table 2, "–" means the code produced no result.
Table 2: Comparison of VDSS and viss for large systems

System    var   mul     Verified acc       Max err  time(s)            Final size
                        VDSS     viss               VDSS      viss     VDSS  viss
LZ2000    2000  3       e-319    –         0        448.07    –        2000  –
simple1   5     9       e-14     e-14      0        0.29      8.20     5     45
addvar2   4     12      e-14     e-13      e-14     11.10     250.67   6     32
reduce3   4     24      e-14     e-14      e-11     12.21     317.50   7     12
unre3     4     36      e-15     e-14      e-13     4.08      360.32   4     32
unre5     8     576     e-14     e-14      e-13     24.26     229.83   8     64
big1      20    512     e-14     e-15      e-12     29.92     1724.09  20    160
big2      20    8192    e-14     e-14      e-12     40.90     1751.61  20    160
big3      30    196608  e-15     e-14      e-15     155.18    425.51   30    240
rankone2  6     32      e-15     e-15      e-15     6.8693    1.5199   11    12
rankone3  6     96      e-15     e-14      e-14     12.44     136.54   11    48
breadth2  5     2       e-322    –         0        0.20      –        5     –
large3    100   3       e-323    e-319     0        187.88    647.86   100   400
large6    500   4       e-321    e-34      0        905.00    3262.78  500   2000
large8    500   4       e-321    –         0        1745.85   –        500   –
nonpoly3  3     64      e-322    e-14      0        0.19      6.62     3     36

From Table 2, we can see that for the examples with more variables and higher multiplicity, our method gives better results in terms of verified accuracy, computing time and final scale.

We also test the example in [11] with the form {x − x − x, x + x − x, . . . , x_{n−1} + x_{n−1} − x_n, x_n}. The example named breadth2 in Table 2 has this form for n = 5. The method in [9] can compute this example for n = 6, taking 659.59 seconds with a final size of 321 variables and 819 polynomials. We tested the cases n = 6, n = 1000 and n = 2000 with our code; it takes 0.228965 seconds, 165.274439 seconds and 1036.773847 seconds respectively, without introducing new variables.

For our method, although we may introduce new variables, the size of our final deflated system is small in experiments. Further, we also compare our method with the four other deflation methods of [9] on the following four small systems:

1: {x₁⁴ − x₂x₃x₄, x₂⁴ − x₁x₃x₄, x₃⁴ − x₁x₂x₄, x₄⁴ − x₁x₂x₃} at (0, 0, 0,
0) with µ = 131;

2: {x⁴, x²y + y⁴, z + z² − 7x³ − 8x²} at (0, 0, −1) with µ = 16;

3: {x + 33y − √ x − √ xy − √ y − √ x + 6x y + 12xy + 8y + √ , x − y − √ x − x y + 6xy − y + 12√ xy − √ x − √ y − √ } at p ≈ (1. , . ) with µ = 5;

4: {x + 2x + 2x + 2x + x − , (x + x − x − ) − x, (2x + 5x + 10x + 5x + 5) − x} at (0, 0, −1) with µ = 18.

The results (see also [9]) are shown below, where method A is from [2, 12], method B from [8], method C from [5], method D from [9], and method E is our method VDSS. In Table 3, we denote by
Poly the number of polynomials in the final deflated system and by Var the number of variables in the final deflated system. Noting that our final system does not always contain all the polynomials of the input system, we also count into Poly those polynomials of the input system that are not contained in the final system.

Table 3: Comparison of VDSS and other methods for four examples

        Method A    Method B    Method C    Method D    Method E
        Poly  Var   Poly  Var   Poly  Var   Poly  Var   Poly  Var
In Table 3, for systems 1, 2 and 4, our method matches the best of the other four methods and simultaneously yields the smallest deflated system among the five methods. For system 3, although our final system has one more variable than that of method D, it has fewer polynomials.
In this paper, we develop a new deflation method for refining or verifying the isolated singular zeros of polynomial systems. Given a polynomial system F ⊂ C[x] with an isolated singular zero p, by computing the derivatives of the input polynomials directly or the linear combinations of the related polynomials, we prove constructively that there exists a final deflated system F̃′(x, α) which has an isolated simple zero (p, α̂), whose partial projection corresponds to the isolated singular zero p of the input system F. New variables α are introduced to represent the coefficients of the linear combinations of the related polynomials, so as to preserve the accuracy of the numerical implementation.

Compared to previous deflation methods, on the one hand, the output size of our method also depends in theory on the depth or the multiplicity of p. On the other hand, thanks to the acceleration strategies proposed in the paper, the size of the final system in our actual computations is much smaller than the theoretical bound, as the experiments we conduct persuasively demonstrate.

To gain a deeper understanding of our approach, we also give some further analysis of the tolerances θ and ε we use. The analysis shows that, in the worst case, our final system is a perturbed system with a bounded perturbation. To make the final system as accurate as possible, we also analyse the case where the tolerance θ is not introduced.

Acknowledgement
This work is partially supported by NSFC Grant 11471327.
References

[1] B. Dayton, T. Li, and Z. Zeng. Multiple zeros of nonlinear systems. Mathematics of Computation, 80:2143–2168, 2011.

[2] B. Dayton and Z. Zeng. Computing the multiplicity structure in solving polynomial systems. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, ISSAC '05, pages 116–123, New York, NY, USA, 2005. ACM.

[3] M. Giusti, G. Lecerf, B. Salvy, and J.-C. Yakoubsohn. On location and approximation of clusters of zeros of analytic functions. Foundations of Computational Mathematics, 5:257–311, 2005.

[4] M. Giusti, G. Lecerf, B. Salvy, and J.-C. Yakoubsohn. On location and approximation of clusters of zeros: case of embedding dimension one. Foundations of Computational Mathematics, 7:1–58, 2007.

[5] M. Giusti and J.-C. Yakoubsohn. Multiplicity hunting and approximating multiple roots of polynomial systems. Contemp. Math., 604:105–128, 2013.

[6] G. Lecerf. Quadratic Newton iteration for systems with multiplicity. Foundations of Computational Mathematics, 2:247–293, 2002.

[7] W. Hao, A. J. Sommese, and Z. Zeng. Algorithm 931: an algorithm and software for computing multiplicity structures at zeros of nonlinear systems. ACM Trans. Math. Software, 40(1):Article 5, 16 pages, 2013.

[8] J. D. Hauenstein and C. W. Wampler. Isosingular sets and deflation. Foundations of Computational Mathematics, 13(3):371–403, 2013.

[9] J. D. Hauenstein, B. Mourrain, and A. Szanto. Certifying isolated singular points and their multiplicity structure. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC '15, pages 213–220, New York, NY, USA, 2015. ACM.

[10] J. D. Hauenstein and F. Sottile. Algorithm 921: alphaCertified: certifying solutions to polynomial systems. ACM Transactions on Mathematical Software, 38(4), August 2012.

[11] J. D. Hauenstein, B. Mourrain, and A. Szanto. On deflation and multiplicity structure. J. Symb. Comput., 83:228–253, 2017.

[12] A. Leykin, J. Verschelde, and A. Zhao. Newton's method with deflation for isolated singularities of polynomial systems. Theoretical Computer Science, 359:111–122, 2006.

[13] A. Leykin, J. Verschelde, and A. Zhao. Higher-order deflation for polynomial systems with isolated singular solutions. In Algorithms in Algebraic Geometry, A. Dickenstein, F.-O. Schreyer, and A. Sommese, eds., vol. 146 of The IMA Volumes in Mathematics and its Applications, pages 79–97. Springer, New York, 2008.

[14] N. Li and L. Zhi. Compute the multiplicity structure of an isolated singular solution: case of breadth one. Journal of Symbolic Computation, 47:700–710, 2012.

[15] N. Li and L. Zhi. Verified error bounds for isolated singular solutions of polynomial systems: case of breadth one. Theoretical Computer Science, 479:163–173, 2013.

[16] N. Li and L. Zhi. Verified error bounds for isolated singular solutions of polynomial systems. SIAM J. Numerical Analysis, 52(4):1623–1640, 2014.

[17] A. Mantzaflaris and B. Mourrain. Deflation and certified isolation of singular zeros of polynomial systems. In Proceedings of ISSAC 2011, pages 249–256, 2011.

[18] N. Yamamoto. Regularization of solutions of nonlinear equations with singular Jacobian matrices. Journal of Information Processing, 7:16–21, 1984.

[19] T. Ojika. Modified deflation algorithm for the solution of singular problems. I. A system of nonlinear algebraic equations. Journal of Mathematical Analysis and Applications, 123:199–221, 1987.

[20] T. Ojika. A numerical method for branch points of a system of nonlinear algebraic equations. Applied Numerical Mathematics, 4:419–430, 1988.

[21] T. Ojika, S. Watanabe, and T. Mitsui. Deflation algorithm for the multiple roots of a system of nonlinear equations. Journal of Mathematical Analysis and Applications, 96:463–479, 1983.

[22] S. M. Rump. INTLAB - INTerval LABoratory. In Tibor Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999.

[23] S. M. Rump and S. Graillat. Verified error bounds for multiple roots of systems of nonlinear equations. Numerical Algorithms, 54(3):359–377, 2010.

[24] J. Verschelde and A. Zhao. Newton's method with deflation for isolated singularities. Poster presented at ISSAC '04, July 2004.

[25] Z. Zeng. Computing multiple roots of inexact polynomials. Mathematics of Computation, 74:869–903, 2005.